Chapter 2. Eviction [policy/v1]
Description

Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions.

Type: object

2.1. Specification

apiVersion (string): APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

deleteOptions (DeleteOptions): DeleteOptions may be provided.

kind (string): Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

metadata (ObjectMeta): ObjectMeta describes the pod that is being evicted.

2.2. API endpoints

The following API endpoints are available:

/api/v1/namespaces/{namespace}/pods/{name}/eviction
POST: create eviction of a Pod

2.2.1. /api/v1/namespaces/{namespace}/pods/{name}/eviction

Table 2.1. Global path parameters

name (string): name of the Eviction.

Table 2.2. Global query parameters

dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: All - all dry run stages will be processed.

fieldValidation (string): fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: Ignore - ignores any unknown fields that are silently dropped from the object, and ignores all but the last duplicate field that the decoder encounters (this is the default behavior prior to v1.23). Warn - sends a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered; the request will still succeed if there are no other errors, and will only persist the last of any duplicate fields (this is the default in v1.23+). Strict - fails the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present; the error returned from the server will contain all unknown and duplicate fields encountered.

HTTP method: POST
Description: create eviction of a Pod.

Table 2.3. Body parameters

body: Eviction schema.

Table 2.4. HTTP responses

200 - OK: Eviction schema
201 - Created: Eviction schema
202 - Accepted: Eviction schema
401 - Unauthorized: Empty
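For illustration, a minimal Eviction request body that could be POSTed to this endpoint is sketched below; the pod name, namespace, and grace period are hypothetical placeholders, and deleteOptions can be omitted entirely:

{
  "apiVersion": "policy/v1",
  "kind": "Eviction",
  "metadata": {
    "name": "example-pod",
    "namespace": "example-namespace"
  },
  "deleteOptions": {
    "gracePeriodSeconds": 30
  }
}

POSTing this body to /api/v1/namespaces/example-namespace/pods/example-pod/eviction returns 200, 201, or 202 with an Eviction schema on success, as listed in Table 2.4.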
Chapter 7. Organizations
An organization is a logical collection of users, teams, projects, and inventories. It is the highest-level object in the controller object hierarchy.

From the navigation menu, select Organizations to display the existing organizations for your installation. Organizations can be searched by Name or Description. Modify organizations using the edit icon. Click Delete to remove a selected organization.

7.1. Creating an organization

Note: Automation controller automatically creates a default organization. If you have a Self-support level license, you have only the default organization available and must not delete it. You can use the default organization as it is initially set up and edit it later.

Click Add to create a new organization. You can configure several attributes of an organization:

Enter the Name for your organization (required).
Enter a Description for the organization.
Max Hosts is only editable by a superuser, to set an upper limit on the number of license hosts that an organization can have. Setting this value to 0 signifies no limit. If you try to add a host to an organization that has reached or exceeded its cap on hosts, an error message displays. The inventory sync output view also shows the host limit error. Click Details for additional information about the error.
Enter the name of the Instance Groups on which to run this organization.
Enter the name of the execution environment on which to run this organization, or search for an existing one. For more information, see Upgrading to Execution Environments.
Optional: Enter the Galaxy Credentials or search from a list of existing ones.

Click Save to finish creating the organization.

When the organization is created, automation controller displays the Organization details, and enables you to manage access and execution environments for the organization. From the Details tab, you can edit or delete the organization.

Note: If you attempt to delete items that are used by other work items, a message lists the items that are affected by the deletion and prompts you to confirm the deletion. Some screens contain items that are invalid or were deleted previously, and these will fail to run.

7.2. Access to organizations

Select Access when viewing your organization to display the users associated with this organization and their roles. Use this page to complete the following tasks:

Manage the user membership for this organization. Click Users on the navigation panel to manage user membership on a per-user basis from the Users page.
Assign specific users certain levels of permissions within your organization.
Enable them to act as an administrator for a particular resource. For more information, see Role-Based Access Controls.

Click a user to display that user's details. You can review, grant, edit, and remove associated permissions for that user. For more information, see Users.

7.2.1. Add a User or Team

To add a user or team to an organization, the user or team must already exist. For more information, see Creating a User and Creating a Team.

To add existing users or teams to the organization:

Procedure

In the Access tab of the Organization page, click Add.
Select whether to add a user or a team, then click Next.
Select one or more users or teams from the list by clicking the checkbox next to each name to add them as members, then click Next.
Select the role you want the selected users or teams to have. Scroll down for a complete list of roles. Different resources have different options available.
Click Save to apply the roles to the selected users or teams, and to add them as members.

The Add Users or Add Teams window displays the updated roles assigned for each user and team.

Note: A user or team with associated roles retains them if they are reassigned to another organization.

To remove roles for a particular user, click the disassociate icon next to its resource. This launches a confirmation dialog asking you to confirm the disassociation.

7.2.2. Work with Notifications

Selecting the Notifications tab on the Organization details page enables you to review any notification integrations you have set up. Use the toggles to enable or disable the notifications to use with your particular organization. For more information, see Enable and Disable Notifications.

If no notifications have been set up, select Administration → Notifications from the navigation panel. For information on configuring notification types, see Notification Types.
Chapter 37. FTP
Both producer and consumer are supported. This component provides access to remote file systems over the FTP and SFTP protocols. When consuming from a remote FTP server, make sure you read the section titled Default when consuming files further below for details related to consuming files.

Absolute paths are not supported. Camel translates an absolute path to a relative one by trimming all leading slashes from directoryname. A WARN message is printed in the logs.

37.1. Dependencies

When using ftp with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration:

<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-ftp-starter</artifactId>
</dependency>

37.2. URI format

ftp://[username@]hostname[:port]/directoryname[?options]

where directoryname represents the underlying directory. The directory name is a relative path; absolute paths are not supported. The relative path can contain nested folders, such as /inbox/us.

The autoCreate option is supported. When the consumer starts, before polling is scheduled, an additional FTP operation is performed to create the directory configured for the endpoint. The default value for autoCreate is true.

If no username is provided, then anonymous login is attempted using no password. If no port number is provided, Camel provides default values according to the protocol (ftp = 21, sftp = 22, ftps = 2222).

You can append query options to the URI in the following format: ?option=value&option=value&...

This component uses two different libraries for the actual FTP work. FTP and FTPS use Apache Commons Net, while SFTP uses JCraft JSCH.

FTPS (also known as FTP Secure) is an extension to FTP that adds support for the Transport Layer Security (TLS) and the Secure Sockets Layer (SSL) cryptographic protocols.

37.3. Configuring Options

Camel components are configured on two levels:

Component level
Endpoint level

37.3.1. Component Level Options

The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connection, and so on. Because components typically have pre-configured defaults for the most common cases, you may need to configure only a few component options, or none at all. You can configure components with Component DSL, in a configuration file (application.properties or application.yaml), or directly with Java code.

37.3.2. Endpoint Level Options

At the endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java.

When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code.
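As a small sketch of this (the placeholder keys, directory, and target endpoint below are hypothetical, not names defined by this component), an FTP endpoint can be configured entirely from property placeholders:

from("ftp://{{ftp.username}}@{{ftp.host}}:{{ftp.port}}/inbox?password={{ftp.password}}&passiveMode=true")
    .to("file:data/inbox");

Each {{...}} token is resolved from the configured properties source (for example, application.properties), which keeps credentials and host details out of the route definition.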
37.4. Component Options

The FTP component supports 3 options, which are listed below.

bridgeErrorHandler (consumer): Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; they are logged at WARN or ERROR level and ignored. Default: false. Type: boolean.

lazyStartProducer (producer): Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: boolean.

autowiredEnabled (advanced): Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring of JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: boolean.

37.5. Endpoint Options

The FTP endpoint is configured using URI syntax:

ftp:host:port/directoryName

with the following path and query parameters:

37.5.1. Path Parameters (3 parameters)

host (common): Required. Hostname of the FTP server. Type: String.
port (common): Port of the FTP server. Type: int.
directoryName (common): The starting directory. Type: String.

37.5.2. Query Parameters (111 parameters)

binary (common): Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). Default: false. Type: boolean.

charset (common): This option is used to specify the encoding of the file. You can use this on the consumer to specify the encoding of the files, which allows Camel to know the charset it should load the file content in, in case the file content is being accessed. Likewise, when writing a file, you can use this option to specify which charset to write the file in. Do mind that when writing the file, Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. Type: String.

disconnect (common): Whether or not to disconnect from the remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead. Default: false. Type: boolean.

doneFileName (common): Producer: If provided, then Camel will write a second done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name, or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name, or you can use dynamic placeholders. The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.noext} are supported as dynamic placeholders. Type: String.
fileName (common): Use an Expression such as File Language to dynamically set the filename. For consumers, it is used as a filename filter. For producers, it is used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: the header itself can also be an Expression.) The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used; this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header, which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes things easier as this avoids having to temporarily store the CamelFileName and restore it afterwards. Type: String.

passiveMode (common): Sets passive mode connections. Default is active mode connections. Default: false. Type: boolean.

separator (common): Sets the path separator to be used. UNIX = uses UNIX-style path separators. Windows = uses Windows-style path separators. Auto = uses the existing path separator in the file name. Enum values: UNIX, Windows, Auto. Default: UNIX. Type: PathSeparator.

transferLoggingIntervalSeconds (common): Configures the interval in seconds to use when logging the progress of upload and download operations that are in flight. This is used for logging progress when operations take a long time. Default: 5. Type: int.

transferLoggingLevel (common): Configures the logging level to use when logging the progress of upload and download operations. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF. Default: DEBUG. Type: LoggingLevel.

transferLoggingVerbose (common): Configures whether to perform verbose (fine-grained) logging of the progress of upload and download operations. Default: false. Type: boolean.

fastExistsCheck (common (advanced)): If this option is set to true, camel-ftp will list the file directly to check if it exists. Since some FTP servers may not support listing the file directly, if the option is false, camel-ftp will use the old way of listing the directory and checking if the file exists. This option also influences readLock=changed, to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files. Default: false. Type: boolean.

bridgeErrorHandler (consumer): Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; they are logged at WARN or ERROR level and ignored. Default: false. Type: boolean.

delete (consumer): If true, the file will be deleted after it is processed successfully. Default: false. Type: boolean.

moveFailed (consumer): Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: when moving the files to the fail location, Camel will handle the error and will not pick up the file again. Type: String.

noop (consumer): If true, the file is not moved or deleted in any way. This option is good for read-only data, or for ETL-type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. Default: false. Type: boolean.
preMove (consumer): Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example, to move in-progress files into the order directory, set this value to order. Type: String.

preSort (consumer): When pre-sort is enabled, the consumer will sort the file and directory names retrieved from the file system during polling. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter and accept files to process by Camel. This option is disabled by default. Default: false. Type: boolean.

recursive (consumer): If a directory, will look for files in all the sub-directories as well. Default: false. Type: boolean.

resumeDownload (consumer): Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition, the option localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads. Default: false. Type: boolean.

sendEmptyMessageWhenIdle (consumer): If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. Default: false. Type: boolean.

streamDownload (consumer): Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. If enabling this option, you must set stepwise=false, as both cannot be enabled at the same time. Default: false. Type: boolean.

download (consumer (advanced)): Whether the FTP consumer should download the file. If this option is set to false, the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file, such as file name, file size, and so on; it is just that the file will not be downloaded. Default: false. Type: boolean.

exceptionHandler (consumer (advanced)): To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, this option is not in use. By default the consumer will deal with exceptions; they are logged at WARN or ERROR level and ignored. Type: ExceptionHandler.

exchangePattern (consumer (advanced)): Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut. Type: ExchangePattern.

handleDirectoryParserAbsoluteResult (consumer (advanced)): Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in absolute paths. The reason for this is that some FTP servers may return file names with absolute paths, and if so, the FTP component needs to handle this by converting the returned path into a relative path. Default: false. Type: boolean.

ignoreFileNotFoundOrPermissionError (consumer (advanced)): Whether to ignore errors when trying to list files in a directory, or when downloading a file, that does not exist or cannot be accessed due to a permission error. By default, when a directory or file does not exist or there is insufficient permission, an exception is thrown. Setting this option to true ignores the error instead. Default: false. Type: boolean.

inProgressRepository (consumer (advanced)): A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to keep track of the current in-progress files being consumed. By default a memory-based repository is used. Type: IdempotentRepository.
localWorkDirectory (consumer (advanced)): When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial if you consume a very big remote file, and can thus conserve memory. Type: String.

onCompletionExceptionHandler (consumer (advanced)): To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happen during the file on-completion process, where the consumer does either a commit or a rollback. The default implementation will log any exception at WARN level and ignore it. Type: ExceptionHandler.

pollStrategy (consumer (advanced)): A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and is being routed in Camel. Type: PollingConsumerPollStrategy.

processStrategy (consumer (advanced)): A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file existing. If this option is set, the readLock option does not apply. Type: GenericFileProcessStrategy.

useList (consumer (advanced)): Whether to allow using the LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command; therefore you can set this option to false. Notice that when using this option, the specific file to download does not include metadata information such as file size, timestamp, permissions, and so on, because that information can only be retrieved when the LIST command is in use. Default: true. Type: boolean.

fileExist (producer): What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. Append adds content to the existing file. Fail throws a GenericFileOperationException, indicating that there is already an existing file. Ignore silently ignores the problem and does not override the existing file, but assumes everything is okay. Move requires the moveExisting option to be configured as well; the option eagerDeleteTargetFile can be used to control what to do when moving the file and a file already exists at the target, which would otherwise cause the move operation to fail. The Move option will move any existing files before writing the target file. TryRename is only applicable if the tempFileName option is in use; this allows renaming the file from the temporary name to the actual name without doing any exists check, which may be faster on some file systems and especially FTP servers. Enum values: Override, Append, Fail, Ignore, Move, TryRename. Default: Override. Type: GenericFileExist.

flatten (producer): Flatten is used to flatten the file name path to strip any leading paths, so it is just the file name. This allows you to consume recursively into sub-directories, but when you, for example, write the files to another directory, they will be written in a single directory. Setting this to true on the producer enforces that any file name in the CamelFileName header will be stripped of any leading paths. Default: false. Type: boolean.
jailStartingDirectory (producer): Used for jailing (restricting) the writing of files to the starting directory (and its subdirectories) only. This is enabled by default, so that Camel does not write files to outside directories (to be more secure out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. Default: true. Type: boolean.

lazyStartProducer (producer): Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: boolean.

moveExisting (producer): Expression (such as File Language) used to compute the file name to use when fileExist=Move is configured. To move files into a backup subdirectory, just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice that file:parent is not supported by the FTP component, as the FTP component can only move existing files to a relative directory based on the current directory as base. Type: String.

tempFileName (producer): The same as the tempPrefix option, but offering more fine-grained control of the naming of the temporary file name, as it uses the File Language. The location of tempFileName is relative to the final file location in the option fileName, not the target directory in the base URI. For example, if the option fileName includes a directory prefix dir/finalFilename, then tempFileName is relative to that subdirectory dir. Type: String.

tempPrefix (producer): This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. It can be used to identify files being written, and also to avoid consumers (not using exclusive read locks) reading in-progress files. It is often used by FTP when uploading big files. Type: String.

allowNullBody (producer (advanced)): Used to specify whether a null body is allowed during file writing. If set to true, an empty file will be created; when set to false, attempting to send a null body to the file component throws a GenericFileWriteException of 'Cannot write null body to file.'. If the fileExist option is set to 'Override', the file will be truncated; if set to append, the file will remain unchanged. Default: false. Type: boolean.

chmod (producer (advanced)): Allows you to set chmod on the stored file. For example chmod=640. Type: String.

disconnectOnBatchComplete (producer (advanced)): Whether or not to disconnect from the remote FTP server right after a batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server. Default: false. Type: boolean.

eagerDeleteTargetFile (producer (advanced)): Whether or not to eagerly delete any existing target file. This option only applies when you use fileExist=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example, you may write big files and want the target file to exist while the temp file is being written. This ensures the target file is only deleted at the very last moment, just before the temp file is renamed to the target file name. This option is also used to control whether to delete any existing files when fileExist=Move is enabled and a file already exists at the target. If copyAndDeleteOnRenameFails is false, an exception will be thrown if an existing file exists; if it is true, the existing file is deleted before the move operation. Default: true. Type: boolean.
keepLastModified (producer (advanced)): Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or a long with the timestamp. If the timestamp exists and the option is enabled, it will set this timestamp on the written file. Note: this option only applies to the file producer. You cannot use this option with any of the ftp producers. Default: false. Type: boolean.

moveExistingFileStrategy (producer (advanced)): Strategy (custom strategy) used to move a file with a special naming token when fileExist=Move is configured. By default, a built-in implementation is used if no custom strategy is provided. Type: FileMoveExistingStrategy.

sendNoop (producer (advanced)): Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default to validate that the connection is still alive, which allows silently re-connecting to be able to upload the file. However, if this causes problems, you can turn this option off. Default: true. Type: boolean.

activePortRange (advanced): Set the client-side port range in active mode. The syntax is minPort-maxPort. Both port numbers are inclusive, for example 10000-19999 to include all 1xxxx ports. Type: String.

autoCreate (advanced): Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. Default: true. Type: boolean.

bufferSize (advanced): Buffer size in bytes used for writing files (or in the case of FTP, for downloading and uploading files). Default: 131072. Type: int.

connectTimeout (advanced): Sets the connect timeout for waiting for a connection to be established. Used by both FTPClient and JSCH. Default: 10000. Type: int.

ftpClient (advanced): To use a custom instance of FTPClient. Type: FTPClient.

ftpClientConfig (advanced): To use a custom instance of FTPClientConfig to configure the FTP client the endpoint should use. Type: FTPClientConfig.

ftpClientConfigParameters (advanced): Used by FtpComponent to provide additional parameters for the FTPClientConfig. Type: Map.

ftpClientParameters (advanced): Used by FtpComponent to provide additional parameters for the FTPClient. Type: Map.

maximumReconnectAttempts (advanced): Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior. Type: int.

reconnectDelay (advanced): Delay in millis Camel will wait before performing a reconnect attempt. Default: 1000. Type: long.

siteCommand (advanced): Sets optional site command(s) to be executed after a successful login. Multiple site commands can be separated using a new-line character. Type: String.

soTimeout (advanced): Sets the socket timeout. For FTP and FTPS this is the SocketOptions.SO_TIMEOUT value in millis. Setting this to 300000 is recommended, so as not to have a hanging connection. On SFTP this option is set as the timeout on the JSCH Session instance. Default: 300000. Type: int.
stepwise (advanced): Sets whether we should stepwise change directories while traversing file structures when downloading files, and also when uploading a file to a directory. You can disable this if you, for example, are in a situation where you cannot change directory on the FTP server due to security reasons. Stepwise cannot be used together with streamDownload. Default: true. Type: boolean.

synchronous (advanced): Sets whether synchronous processing should be strictly used. Default: false. Type: boolean.

throwExceptionOnConnectFailed (advanced): Whether an exception should be thrown if the connection failed (exhausted). By default an exception is not thrown and a WARN is logged. You can use this to enable the exception being thrown, and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method. Default: false. Type: boolean.

timeout (advanced): Sets the data timeout for waiting for a reply. Used only by FTPClient. Default: 30000. Type: int.

antExclude (filter): Ant-style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. Type: String.

antFilterCaseSensitive (filter): Sets the case-sensitive flag on the ant filter. Default: true. Type: boolean.

antInclude (filter): Ant-style filter inclusion. Multiple inclusions may be specified in comma-delimited format. Type: String.

eagerMaxMessagesPerPoll (filter): Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager, the limit is applied during the scanning of files, whereas false would scan all files and then perform sorting. Setting this option to false allows for sorting all files first and then limiting the poll. Mind that this requires higher memory usage, as all file details are held in memory to perform the sorting. Default: true. Type: boolean.

exclude (filter): Is used to exclude files, if the filename matches the regex pattern (matching is case-insensitive). Notice that if you use symbols such as the plus sign and others, you need to configure this using the RAW() syntax if configuring this as an endpoint URI. See more details at configuring endpoint URIs. Type: String.

excludeExt (filter): Is used to exclude files matching a file extension name (case-insensitive). For example, to exclude bak files, use excludeExt=bak. Multiple extensions can be separated by comma; for example, to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts; for example, a file named mydata.tar.gz has the extension tar.gz. For more flexibility, use the include/exclude options. Type: String.

filter (filter): Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if the filter returns false in its accept() method. Type: GenericFileFilter.

filterDirectory (filter): Filters the directory based on Simple language. For example, to filter on the current date, you can use a simple date pattern such as ${date:now:yyyyMMdd}. Type: String.

filterFile (filter): Filters the file based on Simple language. For example, to filter on file size, you can use ${file:size} > 5000. Type: String.

idempotent (filter): Option to use the Idempotent Consumer EIP pattern to let Camel skip already-processed files. Will by default use a memory-based LRUCache that holds 1000 entries. If noop=true, then idempotent will be enabled as well, to avoid consuming the same files over and over again. Default: false. Type: Boolean.

idempotentKey (filter): To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language; for example, to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}. Type: String.
idempotentRepository (filter): A pluggable repository org.apache.camel.spi.IdempotentRepository, which by default uses MemoryIdempotentRepository if none is specified and idempotent is true. Type: IdempotentRepository.

include (filter): Is used to include files, if the filename matches the regex pattern (matching is case-insensitive). Notice that if you use symbols such as the plus sign and others, you need to configure this using the RAW() syntax if configuring this as an endpoint URI. See more details at configuring endpoint URIs. Type: String.

includeExt (filter): Is used to include files matching a file extension name (case-insensitive). For example, to include txt files, use includeExt=txt. Multiple extensions can be separated by comma; for example, to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts; for example, a file named mydata.tar.gz has the extension tar.gz. For more flexibility, use the include/exclude options. Type: String.

maxDepth (filter): The maximum depth to traverse when recursively processing a directory. Default: 2147483647. Type: int.

maxMessagesPerPoll (filter): To define a maximum number of messages to gather per poll. By default no maximum is set. Can be used to set a limit of, for example, 1000, to avoid picking up thousands of files when starting up the server. Set a value of 0 or negative to disable it. Notice: if this option is in use, the File and FTP components will limit before any sorting. For example, if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set it to false to allow scanning all files first and then sorting afterwards. Type: int.

minDepth (filter): The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub-directory. Type: int.

move (filter): Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory, just enter .done. Type: String.

exclusiveReadLockStrategy (lock): Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. Type: GenericFileExclusiveReadLockStrategy.

readLock (lock): Used by the consumer to only poll files for which it has an exclusive read-lock (that is, the file is not in progress or being written). Camel will wait until the file lock is granted. This option provides the following built-in strategies:
- none - No read lock is in use.
- markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component.
- changed - Uses the file length/modification timestamp to detect whether the file is currently being copied or not. It will use at least 1 second to determine this, so this option cannot consume files as fast as the others, but can be more reliable, as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency.
- fileLock - Uses java.nio.channels.FileLock. This option is not available for Windows OS or the FTP component. This approach should be avoided when accessing a remote file system via a mount/share, unless that file system supports distributed file locks.
- rename - Attempts to rename the file as a test of whether an exclusive read-lock can be obtained.
- idempotent - (Only for the file component.) Uses an idempotentRepository as the read-lock. This allows read locks that support clustering, if the idempotent repository implementation supports that.
- idempotent-changed - (Only for the file component.) Uses an idempotentRepository and changed as the combined read-lock. This allows read locks that support clustering, if the idempotent repository implementation supports that.
- idempotent-rename - (Only for the file component.) Uses an idempotentRepository and rename as the combined read-lock. This allows read locks that support clustering, if the idempotent repository implementation supports that.
Notice: not all of the various read locks are suited to working in clustered mode, where concurrent consumers on different nodes compete for the same files on a shared file system. markerFile uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. fileLock may work better, but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as the Hazelcast component or Infinispan. Enum values: none, markerFile, fileLock, rename, changed, idempotent, idempotent-changed, idempotent-rename. Default: none. Type: String.
readLockCheckInterval (lock): Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example, when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 second may be too fast if the producer is very slow at writing the file. Notice: for FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval; a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit. Default: 1000. Type: long.

readLockDeleteOrphanLockFiles (lock): Whether a read lock with marker files should, upon startup, delete any orphan read-lock files that may have been left on the file system if Camel was not properly shut down (such as a JVM crash). If this option is turned to false, any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory. Default: true. Type: boolean.

readLockLoggingLevel (lock): Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF, to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF. Default: DEBUG. Type: LoggingLevel.

readLockMarkerFile (lock): Whether to use a marker file with the changed, rename, or exclusive read-lock types. By default a marker file is used as well, to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false, for example if you do not want the Camel application to write marker files to the file system. Default: true. Type: boolean.
readLockMinAge (lock): This option is applied only for readLock=changed. It allows you to specify a minimum age the file must be before attempting to acquire the read lock. For example, use readLockMinAge=300s to require that the file is at least 5 minutes old. This can speed up the changed read lock, as it will only attempt to acquire files which are at least that given age. Default: 0. Type: long.

readLockMinLength (lock): This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero to allow consuming zero-length files. Default: 1. Type: long.

readLockRemoveOnCommit (lock): This option is applied only for readLock=idempotent. It allows you to specify whether to remove the file name entry from the idempotent repository when processing the file succeeded and a commit happens. By default the file is not removed, which ensures that no race condition occurs, so another active node will not attempt to grab the file. Instead, the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes; this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. Default: false. Type: boolean.

readLockRemoveOnRollback (lock): This option is applied only for readLock=idempotent. It allows you to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). Default: true. Type: boolean.

readLockTimeout (lock): Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, Camel will skip the file. At the next poll Camel will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed, and rename support the timeout. Notice: for FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval; a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit. Default: 10000. Type: long.

backoffErrorThreshold (scheduler): The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. Type: int.

backoffIdleThreshold (scheduler): The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. Type: int.

backoffMultiplier (scheduler): To let the scheduled polling consumer back off if there have been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens. When this option is in use, backoffIdleThreshold and/or backoffErrorThreshold must also be configured. Type: int.

delay (scheduler): Milliseconds before the next poll. Default: 500. Type: long.

greedy (scheduler): If greedy is enabled, the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages. Default: false. Type: boolean.

initialDelay (scheduler): Milliseconds before the first poll starts. Default: 1000. Type: long.

repeatCount (scheduler): Specifies a maximum limit on the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. Default: 0. Type: long.
runLoggingLevel (scheduler): The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF. Default: TRACE. Type: LoggingLevel.

scheduledExecutorService (scheduler): Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single-threaded thread pool. Type: ScheduledExecutorService.

scheduler (scheduler): To use a cron scheduler from either the camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler. Default: none. Type: Object.

schedulerProperties (scheduler): To configure additional properties when using a custom scheduler, or any of the Quartz or Spring based schedulers. Type: Map.

startScheduler (scheduler): Whether the scheduler should be auto-started. Default: true. Type: boolean.

timeUnit (scheduler): Time unit for the initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS. Default: MILLISECONDS. Type: TimeUnit.

useFixedDelay (scheduler): Controls whether fixed delay or fixed rate is used. See ScheduledExecutorService in the JDK for details. Default: true. Type: boolean.

account (security): Account to use for login. Type: String.

password (security): Password to use for login. Type: String.

username (security): Username to use for login. Type: String.

shuffle (sort): To shuffle the list of files (sort in random order). Default: false. Type: boolean.

sortBy (sort): Built-in sort using the File Language. Supports nested sorts, so you can have a sort by file name and, as a second group, a sort by modified date. Type: String.

sorter (sort): Pluggable sorter as a java.util.Comparator class. Type: Comparator.

37.6. FTPS component default trust store

When using ftpClient. properties related to SSL with the FTPS component, the trust store accepts all certificates. If you want to trust only selected certificates, you have to configure the trust store with the ftpClient.trustStore.xxx options, or by configuring a custom ftpClient.

When using sslContextParameters, the trust store is managed by the configuration of the provided SSLContextParameters instance.

You can configure additional options on the ftpClient and ftpClientConfig from the URI directly by using the ftpClient. or ftpClientConfig. prefix. For example, to set the setDataTimeout on the FTPClient to 30 seconds, you can do:

from("ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000").to("bean:foo");

You can mix and match and use both prefixes, for example to also configure the date format or time zones:

from("ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000&ftpClientConfig.serverLanguageCode=fr").to("bean:foo");

You can have as many of these options as you like. See the documentation of the Apache Commons FTP FTPClientConfig for possible options and more details, and likewise for the Apache Commons FTP FTPClient.

If you do not like having many long configuration options in the URL, you can refer to the ftpClient or ftpClientConfig to use by letting Camel look it up in the Registry. For example:

<bean id="myConfig" class="org.apache.commons.net.ftp.FTPClientConfig">
    <property name="lenientFutureDates" value="true"/>
    <property name="serverLanguageCode" value="fr"/>
</bean>

And then let Camel look up this bean when you use the # notation in the URL:

from("ftp://foo@myserver?password=secret&ftpClientConfig=#myConfig").to("bean:foo");

37.7. Examples
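For instance, a minimal poll-and-store route in the Java DSL could look like the following sketch; the host, credentials, and directories are hypothetical, and the options used are described in the tables above. It is the Java equivalent of the XML sample shown in Section 37.17:

from("ftp://scott@myserver/public/reports?password=tiger&binary=true&delay=60000")
    .to("file:target/test-reports");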
37.8. Concurrency

The FTP consumer (with the same endpoint) does not support concurrency (the backing FTP client is not thread-safe). You can use multiple FTP consumers to poll from different endpoints; it is only a single endpoint that does not support concurrent consumers. The FTP producer does not have this issue; it supports concurrency.

37.9. More information

This component is an extension of the File component, so there are more samples and details on the File component page.

37.10. Default when consuming files

The FTP consumer will by default leave the consumed files untouched on the remote FTP server. You have to configure it explicitly if you want it to delete the files or move them to another location. For example, you can use delete=true to delete the files, or use move=.done to move the files into a hidden done subdirectory.

The regular File consumer is different, as it will by default move files to a .camel subdirectory. The reason Camel does not do this by default for the FTP consumer is that it may, by default, lack the permissions needed to move or delete files.

37.10.1. Limitations

The option readLock can be used to force Camel not to consume files that are currently being written. However, this option is turned off by default, as it requires that the user has write access. See the options table at File2 for more details about read locks. There are other solutions to avoid consuming files that are currently being written over FTP; for instance, you can write to a temporary destination and move the file after it has been written.

When moving files using the move or preMove option, the files are restricted to the FTP_ROOT folder. That prevents you from moving files outside the FTP area. If you want to move files to another area, you can use soft links and move files into a soft-linked folder.

37.11. Message Headers

The following message headers can be used to affect the behavior of the component:

CamelFileName: Specifies the output file name (relative to the endpoint directory) to be used for the output message when sending to the endpoint. If this is not present and no expression is set either, then a generated message ID is used as the filename instead.
CamelFileNameProduced: The actual file path (path + name) of the output file that was written. This header is set by Camel, and its purpose is to provide end users with the name of the file that was written.
CamelFileNameConsumed: The file name of the file consumed.
CamelFileHost: The remote hostname.
CamelFileLocalWorkPath: Path to the local work file, if a local work directory is used.

In addition, the FTP/FTPS consumer and producer will enrich the Camel Message with the following headers:

CamelFtpReplyCode: The FTP client reply code (the type is an integer).
CamelFtpReplyString: The FTP client reply string.

37.11.1. Exchange Properties

Camel sets the following exchange properties:

CamelBatchIndex: Current index out of the total number of files being consumed in this batch.
CamelBatchSize: Total number of files being consumed in this batch.
CamelBatchComplete: True if there are no more files in this batch.
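As a sketch of how these headers are typically used (the endpoints and the fixed file name are hypothetical):

// Producer side: set CamelFileName (through the Exchange.FILE_NAME constant) before uploading.
from("direct:upload")
    .setHeader(Exchange.FILE_NAME, constant("report.txt"))
    .to("ftp://scott@myserver/public?password=tiger");

// Consumer side: log the consumed file name and the FTP reply code.
from("ftp://scott@myserver/public?password=tiger")
    .log("Downloaded ${header.CamelFileNameConsumed}, reply code ${header.CamelFtpReplyCode}")
    .to("file:data/inbox");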
37.12. About timeouts

The two sets of libraries (see above) have different APIs for setting timeouts. You can use the connectTimeout option for both of them to set a timeout in millis for establishing a network connection. An individual soTimeout can also be set on FTP/FTPS, which corresponds to using ftpClient.soTimeout. Notice that SFTP automatically uses connectTimeout as its soTimeout. The timeout option only applies to FTP/FTPS as the data timeout, which corresponds to the ftpClient.dataTimeout value. All timeout values are in millis.

37.13. Using Local Work Directory

Camel supports consuming from remote FTP servers and downloading the files directly into a local work directory. This avoids reading the entire remote file content into memory, as it is streamed directly into the local file using FileOutputStream.

Camel stores to a local file with the same name as the remote file, though with .inprogress as the extension while the file is being downloaded. Afterwards, the file is renamed to remove the .inprogress suffix. And finally, when the Exchange is complete, the local file is deleted.

So if you want to download files from a remote FTP server and store them as files, you need to route to a file endpoint, such as:

from("ftp://[email protected]?password=secret&localWorkDirectory=/tmp").to("file://inbox");

Note: the route above is ultra efficient, as it avoids reading the entire file content into memory. It downloads the remote file directly to a local file stream. The java.io.File handle is then used as the Exchange body. The file producer leverages this fact and can work directly on the work file java.io.File handle and perform a java.io.File.rename to the target filename. As Camel knows it is a local work file, it can optimize and use a rename instead of a file copy, as the work file is meant to be deleted anyway.

37.14. Stepwise changing directories

Camel FTP can operate in two modes in terms of traversing directories when consuming files (downloading) or producing files (uploading):

stepwise
not stepwise

You may want to pick either one depending on your situation and security requirements. Some Camel end users can only download files if they use stepwise, while others can only download if they do not. You can use the stepwise option to control the behavior.

Note that stepwise changing of directory will in most cases only work when the user is confined to its home directory and when the home directory is reported as "/".

The difference between the two is best illustrated with an example. Suppose we have a directory structure on the remote FTP server that we need to traverse and download files from, with a file in each of the sub-a (a.txt) and sub-b (b.txt) folders.

37.15. Using stepwise=true (default mode)

When stepwise is enabled, the directory structure is traversed using a sequence of CD commands.

37.16. Using stepwise=false

When stepwise is not used, no CD operations are invoked at all.

37.17. Samples

In the sample below, we set up Camel to download all the reports from the FTP server once every hour (60 min) as BINARY content and store them as files on the local file system.

And the route using the XML DSL:

<route>
    <from uri="ftp://scott@localhost/public/reports?password=tiger&amp;binary=true&amp;delay=60000"/>
    <to uri="file://target/test-reports"/>
</route>

37.17.1. Consuming a remote FTPS server (implicit SSL) and client authentication

from("ftps://admin@localhost:2222/public/camel?password=admin&securityProtocol=SSL&implicit=true"
        + "&ftpClient.keyStore.file=./src/test/resources/server.jks"
        + "&ftpClient.keyStore.password=password&ftpClient.keyStore.keyPassword=password")
    .to("bean:foo");
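As an alternative to the ftpClient.keyStore.xxx options used above (and the ftpClient.trustStore.xxx options in the next example), trust can also be supplied through sslContextParameters, as mentioned in Section 37.6. A minimal sketch, assuming a server.jks trust store and a registry binding named sslParams (both hypothetical):

KeyStoreParameters trustStore = new KeyStoreParameters();
trustStore.setResource("./src/test/resources/server.jks");
trustStore.setPassword("password");

TrustManagersParameters trustManagers = new TrustManagersParameters();
trustManagers.setKeyStore(trustStore);

SSLContextParameters sslParams = new SSLContextParameters();
sslParams.setTrustManagers(trustManagers);

// Bind sslParams into the registry as "sslParams", then reference it from the URI:
// ftps://admin@localhost:2222/public/camel?password=admin&sslContextParameters=#sslParams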
37.17.2. Consuming a remote FTPS server (explicit TLS) and a custom trust store configuration

from("ftps://admin@localhost:2222/public/camel?password=admin&ftpClient.trustStore.file=./src/test/resources/server.jks&ftpClient.trustStore.password=password")
    .to("bean:foo");

37.18. Custom filtering

Camel supports pluggable filtering strategies. The strategy is to implement the built-in org.apache.camel.component.file.GenericFileFilter interface in Java. You can then configure the endpoint with such a filter to skip certain files before they are processed.

In the sample, we have built our own filter that only accepts files whose filename starts with report. We can then configure our route using the filter attribute to reference our filter (using # notation) that we have defined in the Spring XML file:

<!-- define our filter as a plain Spring bean -->
<bean id="myFilter" class="com.mycompany.MyFileFilter"/>

<route>
    <from uri="ftp://[email protected]?password=secret&amp;filter=#myFilter"/>
    <to uri="bean:processInbox"/>
</route>

37.19. Filtering using ANT path matcher

The ANT path matcher is a filter that is shipped out-of-the-box in the camel-spring jar, so you need to depend on camel-spring if you are using Maven. The reason is that we leverage Spring's AntPathMatcher to do the actual matching.

The file paths are matched with the following rules:

? matches one character
* matches zero or more characters
** matches zero or more directories in a path

A sketch of its use is shown after Section 37.21 below.

37.20. Using a proxy with SFTP

To use an HTTP proxy to connect to your remote host, you can configure your route in the following way:

<!-- define our proxy as a plain Spring bean -->
<bean id="proxy" class="com.jcraft.jsch.ProxyHTTP">
    <constructor-arg value="localhost"/>
    <constructor-arg value="7777"/>
</bean>

<route>
    <from uri="sftp://localhost:9999/root?username=admin&amp;password=admin&amp;proxy=#proxy"/>
    <to uri="bean:processFile"/>
</route>

You can also assign a user name and password to the proxy, if necessary. Consult the documentation for com.jcraft.jsch.Proxy to discover all options.

37.21. Setting preferred SFTP authentication method

If you want to explicitly specify the list of authentication methods that should be used by the sftp component, use the preferredAuthentications option. For example, if you would like Camel to attempt to authenticate with a private/public SSH key and fall back to user/password authentication when no public key is available, use the following route configuration:

from("sftp://localhost:9999/root?username=admin&password=admin&preferredAuthentications=publickey,password")
    .to("bean:processFile");
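The following sketch applies the ANT-style matching from Section 37.19 through the antInclude and antExclude endpoint options listed in the options table; the server, credentials, and patterns are hypothetical:

from("ftp://admin@myserver/reports?password=secret&recursive=true&antInclude=**/*.txt&antExclude=**/*.bak")
    .to("file:data/reports");

With recursive=true, **/*.txt accepts txt files at any depth, while bak files are skipped. Remember that camel-spring must be on the classpath, as noted in Section 37.19.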
If the file for some reason does not exist, Camel will by default throw an exception. You can turn this off and ignore the error by setting ignoreFileNotFoundOrPermissionError=true . For example, to have a Camel route that picks up a single file and deletes it after use, you can do from("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true") .to("activemq:queue:report"); Notice that we have used all the options discussed above. You can also use this with ConsumerTemplate . For example, to download a single file (if it exists) and grab the file content as a String type: String data = template.retrieveBodyNoWait("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true", String.class); 37.23. Debug logging This component supports TRACE-level logging, which can be helpful if you have problems. 37.24. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.ftp.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, and so on. true Boolean camel.component.ftp.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.ftp.enabled Whether to enable auto configuration of the ftp component. This is enabled by default. Boolean camel.component.ftp.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.ftps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, and so on. true Boolean camel.component.ftps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.ftps.enabled Whether to enable auto configuration of the ftps component. This is enabled by default. Boolean camel.component.ftps.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.ftps.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.sftp.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, and so on. true Boolean camel.component.sftp.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.sftp.enabled Whether to enable auto configuration of the sftp component. This is enabled by default. Boolean camel.component.sftp.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean
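Although these options can also be set per endpoint URI, the Spring Boot starter reads them from the application configuration. A minimal application.properties sketch (the property names come from the table above; the values are examples only):

camel.component.ftp.enabled=true
camel.component.ftp.bridge-error-handler=true
camel.component.ftp.lazy-start-producer=false
camel.component.ftps.use-global-ssl-context-parameters=true
camel.component.sftp.autowired-enabled=true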
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ftp-starter</artifactId> </dependency>", "ftp://[username@]hostname[:port]/directoryname[?options] sftp://[username@]hostname[:port]/directoryname[?options] ftps://[username@]hostname[:port]/directoryname[?options]", "ftp:host:port/directoryName", "from(\"ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000\").to(\"bean:foo\");", "from(\"ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000&ftpClientConfig.serverLanguageCode=fr\").to(\"bean:foo\");", "<bean id=\"myConfig\" class=\"org.apache.commons.net.ftp.FTPClientConfig\"> <property name=\"lenientFutureDates\" value=\"true\"/> <property name=\"serverLanguageCode\" value=\"fr\"/> </bean>", "from(\"ftp://foo@myserver?password=secret&ftpClientConfig=#myConfig\").to(\"bean:foo\");", "ftp://[email protected]/public/upload/images/holiday2008?password=secret&binary=true ftp://[email protected]:12049/reports/2008/password=secret&binary=false ftp://publicftpserver.com/download", "from(\"ftp://[email protected]?password=secret&localWorkDirectory=/tmp\").to(\"file://inbox\");", "/ /one /one/two /one/two/sub-a /one/two/sub-b", "TYPE A 200 Type set to A PWD 257 \"/\" is current directory. CWD one 250 CWD successful. \"/one\" is current directory. CWD two 250 CWD successful. \"/one/two\" is current directory. SYST 215 UNIX emulated by FileZilla PORT 127,0,0,1,17,94 200 Port command successful LIST 150 Opening data channel for directory list. 226 Transfer OK CWD sub-a 250 CWD successful. \"/one/two/sub-a\" is current directory. PORT 127,0,0,1,17,95 200 Port command successful LIST 150 Opening data channel for directory list. 226 Transfer OK CDUP 200 CDUP successful. \"/one/two\" is current directory. CWD sub-b 250 CWD successful. \"/one/two/sub-b\" is current directory. PORT 127,0,0,1,17,96 200 Port command successful LIST 150 Opening data channel for directory list. 226 Transfer OK CDUP 200 CDUP successful. \"/one/two\" is current directory. CWD / 250 CWD successful. \"/\" is current directory. PWD 257 \"/\" is current directory. CWD one 250 CWD successful. \"/one\" is current directory. CWD two 250 CWD successful. \"/one/two\" is current directory. PORT 127,0,0,1,17,97 200 Port command successful RETR foo.txt 150 Opening data channel for file transfer. 226 Transfer OK CWD / 250 CWD successful. \"/\" is current directory. PWD 257 \"/\" is current directory. CWD one 250 CWD successful. \"/one\" is current directory. CWD two 250 CWD successful. \"/one/two\" is current directory. CWD sub-a 250 CWD successful. \"/one/two/sub-a\" is current directory. PORT 127,0,0,1,17,98 200 Port command successful RETR a.txt 150 Opening data channel for file transfer. 226 Transfer OK CWD / 250 CWD successful. \"/\" is current directory. PWD 257 \"/\" is current directory. CWD one 250 CWD successful. \"/one\" is current directory. CWD two 250 CWD successful. \"/one/two\" is current directory. CWD sub-b 250 CWD successful. \"/one/two/sub-b\" is current directory. PORT 127,0,0,1,17,99 200 Port command successful RETR b.txt 150 Opening data channel for file transfer. 226 Transfer OK CWD / 250 CWD successful. \"/\" is current directory. 
QUIT 221 Goodbye disconnected.", "230 Logged on TYPE A 200 Type set to A SYST 215 UNIX emulated by FileZilla PORT 127,0,0,1,4,122 200 Port command successful LIST one/two 150 Opening data channel for directory list 226 Transfer OK PORT 127,0,0,1,4,123 200 Port command successful LIST one/two/sub-a 150 Opening data channel for directory list 226 Transfer OK PORT 127,0,0,1,4,124 200 Port command successful LIST one/two/sub-b 150 Opening data channel for directory list 226 Transfer OK PORT 127,0,0,1,4,125 200 Port command successful RETR one/two/foo.txt 150 Opening data channel for file transfer. 226 Transfer OK PORT 127,0,0,1,4,126 200 Port command successful RETR one/two/sub-a/a.txt 150 Opening data channel for file transfer. 226 Transfer OK PORT 127,0,0,1,4,127 200 Port command successful RETR one/two/sub-b/b.txt 150 Opening data channel for file transfer. 226 Transfer OK QUIT 221 Goodbye disconnected.", "<route> <from uri=\"ftp://scott@localhost/public/reports?password=tiger&amp;binary=true&amp;delay=60000\"/> <to uri=\"file://target/test-reports\"/> </route>", "from(\"ftps://admin@localhost:2222/public/camel?password=admin&securityProtocol=SSL&implicit=true &ftpClient.keyStore.file=./src/test/resources/server.jks &ftpClient.keyStore.password=password&ftpClient.keyStore.keyPassword=password\") .to(\"bean:foo\");", "from(\"ftps://admin@localhost:2222/public/camel?password=admin&ftpClient.trustStore.file=./src/test/resources/server.jks&ftpClient.trustStore.password=password\") .to(\"bean:foo\");", "<!-- define our sorter as a plain spring bean --> <bean id=\"myFilter\" class=\"com.mycompany.MyFileFilter\"/> <route> <from uri=\"ftp://[email protected]?password=secret&amp;filter=#myFilter\"/> <to uri=\"bean:processInbox\"/> </route>", "<!-- define our sorter as a plain spring bean --> <bean id=\"proxy\" class=\"com.jcraft.jsch.ProxyHTTP\"> <constructor-arg value=\"localhost\"/> <constructor-arg value=\"7777\"/> </bean> <route> <from uri=\"sftp://localhost:9999/root?username=admin&password=admin&proxy=#proxy\"/> <to uri=\"bean:processFile\"/> </route>", "from(\"sftp://localhost:9999/root?username=admin&password=admin&preferredAuthentications=publickey,password\"). to(\"bean:processFile\");", "from(\"ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true\") .to(\"activemq:queue:report\");", "String data = template.retrieveBodyNoWait(\"ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true\", String.class);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-ftp-component-starter
Chapter 14. Replacing storage nodes
Chapter 14. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 14.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 14.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" 14.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace an operational node on Red Hat OpenStack Platform installer-provisioned infrastructure (IPI). Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the node that needs to be replaced. Make a note of its Machine Name . Mark the node as unschedulable using the following command: oc adm cordon <node_name> Drain the node using the following command: oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets Important This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Machines . Search for the required machine. Beside the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into the Running state. Important This activity may take 5-10 minutes or more. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage="" Verification steps Execute the following command and verify that the new node is present in the output: oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1 Click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Verify that new OSD pods are running on the replacement node. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s). If verification steps fail, contact Red Hat Support . 14.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Perform this procedure to replace a failed node that is not operational on Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) for OpenShift Data Foundation. Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the faulty node and click its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Nodes , and confirm that the new node is in Ready state.
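While you wait, you can watch the replacement machine and node come up from the command line. A quick sketch using standard oc commands (openshift-machine-api is the namespace in which Machine resources live):

# watch the replacement machine transition to Running
oc get machines -n openshift-machine-api -w
# watch the new node transition to Ready
oc get nodes -w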
Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage="" Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from the Red Hat OpenStack Platform console. Verification steps Execute the following command and verify that the new node is present in the output: oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1 Click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Verify that new OSD pods are running on the replacement node. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s). If verification steps fail, contact Red Hat Support .
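The encryption check described in the verification steps can be run as a short command sequence. A sketch, assuming <node_name> is one of the new nodes; a crypt entry in the lsblk output indicates an encrypted OSD device:

oc debug node/<node_name>
# inside the debug pod:
chroot /host
lsblk | grep crypt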
[ "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd", "oc debug node/<node name> chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd", "oc debug node/<node name> chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_nodes
Monitoring Ceph with Nagios Guide
Monitoring Ceph with Nagios Guide Red Hat Ceph Storage 6 Monitoring Ceph with Nagios Core. Red Hat Ceph Storage Documentation Team
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/monitoring_ceph_with_nagios_guide/index
Chapter 13. Installing on vSphere
Chapter 13. Installing on vSphere The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling. 13.1. Adding hosts on vSphere You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere. To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines. Prerequisites You are using vSphere 7.0.2 or higher. You have the vSphere govc CLI tool installed and configured. You have set the disk.EnableUUID parameter to TRUE in vSphere. You have created a cluster in the Assisted Installer web console, or You have created an Assisted Installer cluster profile and infrastructure environment with the API. You have exported your infrastructure environment ID in your shell as $INFRA_ENV_ID . Procedure Configure the discovery image if you want it to boot with an ignition file. In Cluster details , select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional. In Host discovery , click the Add hosts button and select the provisioning type. Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Select the required discovery image ISO. Note Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot. In Networking , select Cluster-managed networking or User-managed networking : Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Note The proxy username and password must be URL-encoded. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates. Optional: Configure the discovery image if you want to boot it with an ignition file. For more information, see Additional Resources . Click Generate Discovery ISO . Copy the Discovery ISO URL . Download the discovery ISO: $ wget -O vsphere-discovery-image.iso <discovery_url> Replace <discovery_url> with the Discovery ISO URL from the preceding step.
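The govc commands in the following steps assume the CLI is already authenticated against vCenter. A minimal environment sketch using the standard govc environment variables (the values are placeholders):

export GOVC_URL="https://vcenter.example.com"
export GOVC_USERNAME="administrator@vsphere.local"
export GOVC_PASSWORD="<password>"
export GOVC_INSECURE=true
govc about   # verify connectivity before continuing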
On the command line, power off and delete any preexisting virtual machines: $ for VM in $(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off $VM /usr/local/bin/govc vm.destroy $VM done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Remove preexisting ISO images from the data store, if there are any: $ govc datastore.rm -ds <iso_datastore> <image> Replace <iso_datastore> with the name of the data store. Replace <image> with the name of the ISO image. Upload the Assisted Installer discovery ISO: $ govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso Replace <iso_datastore> with the name of the data store. Note All nodes in the cluster must boot from the discovery image. Boot three to five control plane nodes: $ govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=16 \ -m=32768 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for control plane nodes. Boot at least two worker nodes: $ govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=4 \ -m=8192 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for worker nodes. Ensure the VMs are running: $ govc ls /<datacenter>/vm/<folder_name> Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. After 2 minutes, shut down the VMs: $ for VM in $(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true $VM done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Set the disk.EnableUUID setting to TRUE : $ for VM in $(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm $VM -e disk.EnableUUID=TRUE done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Note You must set disk.EnableUUID to TRUE on all of the nodes to enable autoscaling with vSphere. Restart the VMs: $ for VM in $(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true $VM done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them has a Ready status. Select roles if needed. In Networking , clear the Allocate IPs via DHCP server checkbox. Set the API VIP address. Set the Ingress VIP address. Continue with the installation procedure. Additional resources Configuring the discovery image 13.2.
vSphere postinstallation configuration using the CLI After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter username vCenter password vCenter address vCenter cluster Data center Data store Folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure Generate a base64-encoded username and password for vCenter: $ echo -n "<vcenter_username>" | base64 -w0 Replace <vcenter_username> with your vCenter username. $ echo -n "<vcenter_password>" | base64 -w0 Replace <vcenter_password> with your vCenter password. Back up the vSphere credentials: $ oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml Edit the vSphere credentials: $ cp creds_backup.yaml vsphere-creds.yaml $ vi vsphere-creds.yaml apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: "2022-01-25T17:39:50Z" name: vsphere-creds namespace: kube-system resourceVersion: "2437" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password. Replace the vSphere credentials: $ oc replace -f vsphere-creds.yaml Redeploy the kube-controller-manager pods: $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge Back up the vSphere cloud provider configuration: $ oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml Edit the cloud provider configuration: $ cp cloud-provider-config_backup.yaml cloud-provider-config.yaml $ vi cloud-provider-config.yaml apiVersion: v1 data: config: | [Global] secret-name = "vsphere-creds" secret-namespace = "kube-system" insecure-flag = "1" [Workspace] server = "<vcenter_address>" datacenter = "<datacenter>" default-datastore = "<datastore>" folder = "/<datacenter>/vm/<folder>" [VirtualCenter "<vcenter_address>"] datacenters = "<datacenter>" kind: ConfigMap metadata: creationTimestamp: "2022-01-25T17:40:49Z" name: cloud-provider-config namespace: openshift-config resourceVersion: "2070" uid: 80bb8618-bf25-442b-b023-b31311918507 Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs. Apply the cloud provider configuration: $ oc apply -f cloud-provider-config.yaml Taint the nodes with the uninitialized taint: Important Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later. Identify the nodes to taint: $ oc get nodes Run the following command for each node: $ oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Replace <node_name> with the name of the node.
Example $ oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f $ oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule $ oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule $ oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule $ oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule $ oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Back up the infrastructures configuration: $ oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup Edit the infrastructures configuration: $ cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml $ vi infrastructures.config.openshift.io.yaml apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: "2022-05-07T10:19:55Z" generation: 1 name: cluster resourceVersion: "536" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: "/<data_center>/path/to/folder" networks: - "VM Network" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: "" Replace <vcenter_address> with your vCenter address. Replace <data_center> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace path/to/folder with the path to the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed. Apply the infrastructures configuration: $ oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true 13.3. vSphere postinstallation configuration using the web console After installing an OpenShift Container Platform cluster by using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter address vCenter cluster vCenter username vCenter password Data center Default data store Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed.
Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machines of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config file in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . Verification The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Follow the steps below to monitor the configuration process. Check that the configuration process completed successfully: In the Administrator perspective, navigate to Home > Overview . Under Status , click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaims object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation.
To troubleshoot a PersistentVolumeClaims object, navigate to Storage PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console.
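You can also confirm from the command line that the claim bound successfully. A sketch using the example PersistentVolumeClaim created above (a STATUS of Bound indicates that dynamic provisioning worked):

oc get pvc test-pvc -n openshift-config
# events in the describe output show provisioning errors, if any
oc describe pvc test-pvc -n openshift-config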
[ "wget - O vsphere-discovery-image.iso <discovery_url>", "for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done", "govc datastore.rm -ds <iso_datastore> <image>", "govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=16 -m=32768 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=4 -m=8192 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc ls /<datacenter>/vm/<folder_name>", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.EnableUUID=TRUE done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done", "echo -n \"<vcenter_username>\" | base64 -w0", "echo -n \"<vcenter_password>\" | base64 -w0", "oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml", "cp creds_backup.yaml vsphere-creds.yaml", "vi vsphere-creds.yaml", "apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: \"2022-01-25T17:39:50Z\" name: vsphere-creds namespace: kube-system resourceVersion: \"2437\" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque", "oc replace -f vsphere-creds.yaml", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml", "cp cloud-provider-config_backup.yaml cloud-provider-config.yaml", "vi cloud-provider-config.yaml", "apiVersion: v1 data: config: | [Global] secret-name = \"vsphere-creds\" secret-namespace = \"kube-system\" insecure-flag = \"1\" [Workspace] server = \"<vcenter_address>\" datacenter = \"<datacenter>\" default-datastore = \"<datastore>\" folder = \"/<datacenter>/vm/<folder>\" [VirtualCenter \"<vcenter_address>\"] datacenters = \"<datacenter>\" kind: ConfigMap metadata: creationTimestamp: \"2022-01-25T17:40:49Z\" name: cloud-provider-config namespace: openshift-config resourceVersion: \"2070\" uid: 80bb8618-bf25-442b-b023-b31311918507", "oc apply -f cloud-provider-config.yaml", "oc get nodes", "oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc 
adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup", "cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml", "vi infrastructures.config.openshift.io.yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: \"2022-05-07T10:19:55Z\" generation: 1 name: cluster resourceVersion: \"536\" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: \"/<data_center>/path/to/folder\" networks: - \"VM Network\" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: \"\"", "oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/installing-on-vsphere
Chapter 7. Event [events.k8s.io/v1]
Chapter 7. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object Required eventTime 7.1. Specification Property Type Description action string action is what action was taken/failed regarding to the regarding object. It is machine-readable. This field cannot be empty for new Events and it can have at most 128 characters. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deprecatedCount integer deprecatedCount is the deprecated field assuring backward compatibility with core.v1 Event type. deprecatedFirstTimestamp Time deprecatedFirstTimestamp is the deprecated field assuring backward compatibility with core.v1 Event type. deprecatedLastTimestamp Time deprecatedLastTimestamp is the deprecated field assuring backward compatibility with core.v1 Event type. deprecatedSource EventSource deprecatedSource is the deprecated field assuring backward compatibility with core.v1 Event type. eventTime MicroTime eventTime is the time when this Event was first observed. It is required. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata note string note is a human-readable description of the status of this operation. Maximal length of the note is 1kB, but libraries should be prepared to handle values up to 64kB. reason string reason is why the action was taken. It is human-readable. This field cannot be empty for new Events and it can have at most 128 characters. regarding ObjectReference regarding contains the object this Event is about. In most cases it's an Object reporting controller implements, e.g. ReplicaSetController implements ReplicaSets and this event is emitted because it acts on some changes in a ReplicaSet object. related ObjectReference related is the optional secondary object for more complex actions. E.g. when regarding object triggers a creation or deletion of related object. reportingController string reportingController is the name of the controller that emitted this Event, e.g. kubernetes.io/kubelet . This field cannot be empty for new Events. reportingInstance string reportingInstance is the ID of the controller instance, e.g. kubelet-xyzf . This field cannot be empty for new Events and it can have at most 128 characters. series object EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. How often to update the EventSeries is up to the event reporters. 
The default event reporter in "k8s.io/client-go/tools/events/event_broadcaster.go" shows how this struct is updated on heartbeats and can guide customized reporter implementations. type string type is the type of this event (Normal, Warning), new types could be added in the future. It is machine-readable. This field cannot be empty for new Events. 7.1.1. .series Description EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. How often to update the EventSeries is up to the event reporters. The default event reporter in "k8s.io/client-go/tools/events/event_broadcaster.go" shows how this struct is updated on heartbeats and can guide customized reporter implementations. Type object Required count lastObservedTime Property Type Description count integer count is the number of occurrences in this series up to the last heartbeat time. lastObservedTime MicroTime lastObservedTime is the time when last Event from the series was seen before last heartbeat. 7.2. API endpoints The following API endpoints are available: /apis/events.k8s.io/v1/events GET : list or watch objects of kind Event /apis/events.k8s.io/v1/watch/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /apis/events.k8s.io/v1/namespaces/{namespace}/events DELETE : delete collection of Event GET : list or watch objects of kind Event POST : create an Event /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /apis/events.k8s.io/v1/namespaces/{namespace}/events/{name} DELETE : delete an Event GET : read the specified Event PATCH : partially update the specified Event PUT : replace the specified Event /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events/{name} GET : watch changes to an object of kind Event. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/events.k8s.io/v1/events HTTP method GET Description list or watch objects of kind Event Table 7.1. HTTP responses HTTP code Response body 200 - OK EventList schema 401 - Unauthorized Empty 7.2.2. /apis/events.k8s.io/v1/watch/events HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 7.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/events.k8s.io/v1/namespaces/{namespace}/events HTTP method DELETE Description delete collection of Event Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Event Table 7.5. HTTP responses HTTP code Response body 200 - OK EventList schema 401 - Unauthorized Empty HTTP method POST Description create an Event Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body Event schema Table 7.8. HTTP responses HTTP code Response body 200 - OK Event schema 201 - Created Event schema 202 - Accepted Event schema 401 - Unauthorized Empty 7.2.4. /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 7.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/events.k8s.io/v1/namespaces/{namespace}/events/{name} Table 7.10. Global path parameters Parameter Type Description name string name of the Event HTTP method DELETE Description delete an Event Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.12. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Event Table 7.13. HTTP responses HTTP code Response body 200 - OK Event schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Event Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. HTTP responses HTTP code Response body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Event Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body Event schema Table 7.18. HTTP responses HTTP code Response body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty 7.2.6. /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events/{name} Table 7.19. Global path parameters Parameter Type Description name string name of the Event HTTP method GET Description watch changes to an object of kind Event. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
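To illustrate the required fields described above, the following is a minimal sketch of an Event manifest (the object names, namespace, and timestamp are hypothetical; eventTime is required, and action, reason, reportingController, and reportingInstance cannot be empty for new Events):

apiVersion: events.k8s.io/v1
kind: Event
metadata:
  name: example-event
  namespace: default
eventTime: "2024-01-01T12:00:00.000000Z"
action: Scheduled
reason: Scheduled
reportingController: example.com/sample-controller
reportingInstance: sample-controller-abc12
type: Normal
note: Example event emitted by a hypothetical controller.
regarding:
  kind: Pod
  name: example-pod
  namespace: default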
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/metadata_apis/event-events-k8s-io-v1
Chapter 1. Distributed tracing release notes
Chapter 1. Distributed tracing release notes 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.4. New features and enhancements This release adds improvements related to the following components and concepts. 1.4.1. New features and enhancements Red Hat OpenShift distributed tracing 2.5 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform Operator. The Operator now automatically enables the OTLP ports: Port 4317 is used for OTLP gRPC protocol. Port 4318 is used for OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat OpenShift distributed tracing data collection Operator. 1.4.1.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.36 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.56 1.4.2. New features and enhancements Red Hat OpenShift distributed tracing 2.4 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. 
Self-provisioning, which means using the Red Hat OpenShift distributed tracing platform Operator to call the Red Hat Elasticsearch Operator during installation, is fully supported with this release. Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform to use the certificate is a Technology Preview for this release. Note When upgrading to Red Hat OpenShift distributed tracing 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.4.2.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.34.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.49 1.4.3. New features and enhancements Red Hat OpenShift distributed tracing 2.3.1 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.3.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.2 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.44.1-1 1.4.4. New features and enhancements Red Hat OpenShift distributed tracing 2.3.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. With this release, the Red Hat OpenShift distributed tracing platform Operator is now installed to the openshift-distributed-tracing namespace by default. Previously, the default installation was in the openshift-operators namespace. 1.4.4.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.44.0 1.4.5. New features and enhancements Red Hat OpenShift distributed tracing 2.2.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.5.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.2.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.0 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.42.0 1.4.6. New features and enhancements Red Hat OpenShift distributed tracing 2.1.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.6.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.1.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.29.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.41.1 1.4.7. New features and enhancements Red Hat OpenShift distributed tracing 2.0.0 This release marks the rebranding of Red Hat OpenShift Jaeger to Red Hat OpenShift distributed tracing. This release consists of the following changes, additions, and improvements: Red Hat OpenShift distributed tracing now consists of the following two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . 
Updates the Red Hat OpenShift distributed tracing platform Operator to Jaeger 1.28. Going forward, Red Hat OpenShift distributed tracing will only support the stable Operator channel. Channels for individual releases are no longer supported. Introduces a new Red Hat OpenShift distributed tracing data collection Operator based on OpenTelemetry 0.33. Note that this Operator is a Technology Preview feature. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OpenShift OperatorHub. Includes rolling updates to the documentation to support the name change and new features. This release also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.7.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.0.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.28.0 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.33.0 1.5. Red Hat OpenShift distributed tracing Technology Preview Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 1.5.1. Red Hat OpenShift distributed tracing 2.4.0 Technology Preview This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. Self-provisioning, which means using the Red Hat OpenShift distributed tracing platform Operator to call the Red Hat Elasticsearch Operator during installation, is fully supported with this release. Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform to use the certificate is a Technology Preview for this release. 1.5.2. Red Hat OpenShift distributed tracing 2.2.0 Technology Preview Unsupported OpenTelemetry Collector components included in the 2.1 release have been removed. 1.5.3. Red Hat OpenShift distributed tracing 2.1.0 Technology Preview This release introduces a breaking change to how certificates are configured in the OpenTelemetry custom resource file. In the new version, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.5.4. Red Hat OpenShift distributed tracing 2.0.0 Technology Preview This release includes the addition of the Red Hat OpenShift distributed tracing data collection, which you install using the Red Hat OpenShift distributed tracing data collection Operator. Red Hat OpenShift distributed tracing data collection is based on the OpenTelemetry APIs and instrumentation. 
Red Hat OpenShift distributed tracing data collection includes the OpenTelemetry Operator and Collector. The Collector can be used to receive traces in either the OpenTelemetry or Jaeger protocol and send the trace data to Red Hat OpenShift distributed tracing. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.6. Red Hat OpenShift distributed tracing known issues These limitations exist in Red Hat OpenShift distributed tracing: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. These are the known issues for Red Hat OpenShift distributed tracing: TRACING-2057 The Kafka API has been updated to v1beta2 to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services: Jaeger Operator channel: 1.17.x stable or 1.20.x stable AMQ Streams Operator channel: amq-streams-1.6.x To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either amq-streams-1.7.x or stable . 1.7. Red Hat OpenShift distributed tracing fixed issues TRACING-2337 Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following: {"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true} This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port. TRACING-2009 The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0. TRACING-1907 The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect OwnerReference field setting and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed. TRACING-1725 Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using same name but within different namespaces. See also BZ-1918920 . TRACING-1631 Multiple Jaeger production instances, using same name but within different namespaces, causing Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters. TRACING-1300 Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector. TRACING-1208 Authentication "500 Internal Error" when accessing Jaeger UI. 
When trying to authenticate to the UI using OAuth, a 500 error occurred because the oauth-proxy sidecar did not trust the custom CA bundle defined at installation time with the additionalTrustBundle . TRACING-1166 It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in an error: Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076 . TRACING-809 Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic, they continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see Jaegertracing-1819 . BZ-1918920 / LOG-1619 The Elasticsearch pods do not get restarted automatically after an update. Workaround: Restart the pods manually.
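As an illustration of the TRACING-2057 resolution described above, switching the subscription channel for the AMQ Streams Operator means editing the channel field of its Subscription resource. The following is a minimal sketch only; the metadata name, namespace, and catalog source are assumptions that depend on how the Operator was installed in your cluster:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams               # assumed subscription name
  namespace: openshift-operators  # assumed installation namespace
spec:
  channel: stable                 # switch from amq-streams-1.6.x to stable or amq-streams-1.7.x
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace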
[ "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/distributed_tracing/distr-tracing-release-notes
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_8/making-open-source-more-inclusive
5.16. Configuring Firewall Lockdown
5.16. Configuring Firewall Lockdown Local applications or services are able to change the firewall configuration if they are running as root (for example, libvirt ). With this feature, the administrator can lock the firewall configuration so that either no applications or only applications that are added to the lockdown whitelist are able to request firewall changes. The lockdown settings default to disabled. If enabled, the user can be sure that there are no unwanted configuration changes made to the firewall by local applications or services. 5.16.1. Configuring Lockdown with the Command-Line Client To query whether lockdown is enabled, use the following command as root : The command prints yes with exit status 0 if lockdown is enabled. It prints no with exit status 1 otherwise. To enable lockdown, enter the following command as root : To disable lockdown, use the following command as root : 5.16.2. Configuring Lockdown Whitelist Options with the Command-Line Client The lockdown whitelist can contain commands, security contexts, users, and user IDs. If a command entry on the whitelist ends with an asterisk " * " , then all command lines starting with that command will match. If the " * " is not there, then the absolute command including arguments must match. The context is the security (SELinux) context of a running application or service. To get the context of a running application, use the following command: That command returns all running applications. Pipe the output through the grep tool to get the application of interest. For example: To list all command lines that are on the whitelist, enter the following command as root : To add a command command to the whitelist, enter the following command as root : To remove a command command from the whitelist, enter the following command as root : To query whether the command command is on the whitelist, enter the following command as root : The command prints yes with exit status 0 if true. It prints no with exit status 1 otherwise. To list all security contexts that are on the whitelist, enter the following command as root : To add a context context to the whitelist, enter the following command as root : To remove a context context from the whitelist, enter the following command as root : To query whether the context context is on the whitelist, enter the following command as root : The command prints yes with exit status 0 if true. It prints no with exit status 1 otherwise. To list all user IDs that are on the whitelist, enter the following command as root : To add a user ID uid to the whitelist, enter the following command as root : To remove a user ID uid from the whitelist, enter the following command as root : To query whether the user ID uid is on the whitelist, enter the following command: The command prints yes with exit status 0 if true. It prints no with exit status 1 otherwise. To list all user names that are on the whitelist, enter the following command as root : To add a user name user to the whitelist, enter the following command as root : To remove a user name user from the whitelist, enter the following command as root : To query whether the user name user is on the whitelist, enter the following command: The command prints yes with exit status 0 if true. It prints no with exit status 1 otherwise. 5.16.3. Configuring Lockdown Whitelist Options with Configuration Files The default whitelist configuration file contains the NetworkManager context and the default context of libvirt . The user ID 0 is also on the list. 
Following is an example whitelist configuration file enabling all commands for the firewall-cmd utility, for a user called user whose user ID is 815 : This example shows both the user id and user name options, but only one of them is required. Python is the interpreter and is prepended to the command line. You can also use a specific command, for example: /usr/bin/python /bin/firewall-cmd --lockdown-on In that example, only the --lockdown-on command is allowed. Note In Red Hat Enterprise Linux 7, all utilities are placed in the /usr/bin/ directory and the /bin/ directory is symlinked to the /usr/bin/ directory. In other words, although the path for firewall-cmd when run as root might resolve to /bin/firewall-cmd , /usr/bin/firewall-cmd can now be used. All new scripts should use the new location. But be aware that if scripts that run as root have been written to use the /bin/firewall-cmd path, then that command path must be whitelisted in addition to the /usr/bin/firewall-cmd path traditionally used only for non-root users, as shown in the sketch at the end of this section. The " * " at the end of the name attribute of a command means that all commands that start with this string will match. If the " * " is not there, then the absolute command including arguments must match.
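To illustrate the note above, both command paths can be added to the lockdown whitelist with the same wildcard syntax that the example file uses. This sketch is built from the commands documented in this section; the wildcard form matches all firewall-cmd invocations:

~]# firewall-cmd --add-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/firewall-cmd*'
~]# firewall-cmd --add-lockdown-whitelist-command='/usr/bin/python -Es /bin/firewall-cmd*'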
[ "~]# firewall-cmd --query-lockdown", "~]# firewall-cmd --lockdown-on", "~]# firewall-cmd --lockdown-off", "~]USD ps -e --context", "~]USD ps -e --context | grep example_program", "~]# firewall-cmd --list-lockdown-whitelist-commands", "~]# firewall-cmd --add-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/ command '", "~]# firewall-cmd --remove-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/ command '", "~]# firewall-cmd --query-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/ command '", "~]# firewall-cmd --list-lockdown-whitelist-contexts", "~]# firewall-cmd --add-lockdown-whitelist-context= context", "~]# firewall-cmd --remove-lockdown-whitelist-context= context", "~]# firewall-cmd --query-lockdown-whitelist-context= context", "~]# firewall-cmd --list-lockdown-whitelist-uids", "~]# firewall-cmd --add-lockdown-whitelist-uid= uid", "~]# firewall-cmd --remove-lockdown-whitelist-uid= uid", "~]USD firewall-cmd --query-lockdown-whitelist-uid= uid", "~]# firewall-cmd --list-lockdown-whitelist-users", "~]# firewall-cmd --add-lockdown-whitelist-user= user", "~]# firewall-cmd --remove-lockdown-whitelist-user= user", "~]USD firewall-cmd --query-lockdown-whitelist-user= user", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <selinux context=\"system_u:system_r:virtd_t:s0-s0:c0.c1023\"/> <user id=\"0\"/> </whitelist>", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/bin/python -Es /bin/firewall-cmd*\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <user id=\"815\"/> <user name=\"user\"/> </whitelist>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/configuring_firewall_lockdown
Chapter 2. Supported architectures
Chapter 2. Supported architectures The first version of Red Hat Enterprise Linux 8 for SAP Solutions to include E4S repositories and packages for SAP was RHEL 8.0 (kernel 4.18.0-80), which provides support for the following architectures: Intel 64-bit architecture (x86_64) IBM Power, Little Endian (ppc64le) For more information, see Red Hat Enterprise Linux Technology Capabilities and Limits . Subsequent RHEL 8 versions that included E4S repositories and packages for SAP were: RHEL 8.1 (kernel 4.18.0-147) RHEL 8.2 (kernel 4.18.0-193) RHEL 8.4 (kernel 4.18.0-305) RHEL 8.6 (kernel 4.18.0-372)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/con_supported-architectures_8.x_release_notes
Chapter 6. New Packages
Chapter 6. New Packages 6.1. RHEA-2013:0278 - new packages: dev86 and iasl New dev86 and iasl packages are now available for Red Hat Enterprise Linux 6. The dev86 and iasl packages are build dependencies of the qemu-kvm package. This enhancement update adds the dev86 and iasl packages to the 32-bit x86 Optional channels of Red Hat Enterprise Linux 6. (BZ# 901677 , BZ# 901678 ) All users who require dev86 and iasl are advised to install these new packages.
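For example, on a system subscribed to the appropriate Optional channel, the packages can be installed as root with the following command (shown for illustration; yum resolves any remaining dependencies):

~]# yum install dev86 iasl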
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ch06
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/operating_ansible_automation_platform/providing-feedback
Chapter 1. Introduction to Red Hat JBoss Enterprise Application Platform
Chapter 1. Introduction to Red Hat JBoss Enterprise Application Platform Before you start working with Red Hat JBoss Enterprise Application Platform, you must understand some general components used by JBoss EAP. When you understand these components, you can enhance both your use of JBoss EAP and your ability to configure it. 1.1. Uses of JBoss EAP 7 Red Hat JBoss Enterprise Application Platform (JBoss EAP) 7.4 is compatible with Jakarta EE 8 specifications, such as the Web Profile and Full Platform specifications. Each major version of JBoss EAP provides you with a tested, stabilized, and certified product. JBoss EAP provides preconfigured options for features such as high-availability clustering, messaging, and distributed caching. You can use JBoss EAP to deploy and run applications using supported APIs and services. Additionally, you can configure JBoss EAP to meet your needs, for example: You can customize JBoss EAP to include only the subsystems required to meet your needs. This can improve the startup speed of your JBoss EAP 7.4 instance. You can script and automate tasks by using the web-based management console and the management command line interface (CLI) to avoid having to edit XML configuration files. Major versions of JBoss EAP are forked from the WildFly community project at intervals when the community project has reached the desired feature completeness level. The major version is tested until it is stabilized, certified, and enhanced for production use. During the life cycle of a JBoss EAP major version, selected features are cherry-picked and back-ported from the community project into minor releases within the major release. Each minor release introduces feature enhancements to the major release. Additional resources For information about the WildFly community project, see WildFly . 1.2. JBoss EAP 7 features JBoss EAP includes a variety of features to meet the needs of your organization. Table 1.1. Features of JBoss EAP Feature Description Jakarta EE compatible JBoss EAP 7.4 is a Jakarta EE 8 compatible implementation for both Web Profile and Full Platform specifications. Managed Domain Centralized management of multiple server instances and physical hosts, compared to a standalone server that supports just a single server instance. Provides server-group management of configuration, deployment, socket bindings, modules, extensions, and system properties. Centralized and simplified management of application security and security domains. Management console and management CLI New domain or standalone server management interfaces. The management CLI includes a batch mode that scripts and automates management tasks. NOTE: To avoid making changes to your system configuration while a domain is active, do not edit the config.xml file for the domain. Do not directly edit the JBoss EAP XML configuration files. Use the management CLI to modify configurations. Simplified directory layout The modules directory contains application server modules. The domain directories contain the artifacts and configuration files for domain deployments. The standalone directories contain the standalone deployments. Modular class-loading mechanism Modules are loaded and unloaded on demand. This practice improves security and performance and reduces startup and restart times. Streamlined datasource management Database drivers are deployed similarly to other JBoss EAP services. 
Datasources are created and managed with the management console and management CLI. Unified security framework Elytron provides a single unified framework for managing and configuring access for both standalone servers and managed domains. Additionally, Elytron is used to configure security access for applications deployed on JBoss EAP servers. 1.3. Application servers An application server, or app server, is software that provides an environment to run web applications. Most app servers use a set of APIs to provide functionality to web applications. For example, an app server can use an API to connect to a database. 1.4. JBoss EAP subsystems JBoss EAP organizes APIs into subsystems. You can configure these subsystems to enhance the capabilities of your JBoss EAP instance. Administrators can configure these subsystems to support different behavior, depending on the goal of the application. For instance, if an application requires a database, you must configure a datasource so that a deployed application on a JBoss EAP server or a domain server can access the database. 1.5. High availability (HA) functionality of JBoss EAP You can use the JBoss EAP HA functionality to enhance any running applications by providing improved data sharing among multiple running JBoss EAP instances. HA in JBoss EAP refers to multiple JBoss EAP instances working together to deliver enhanced applications that are more resistant to fluctuations in data flow, server load, and server failure. HA incorporates numerous qualities, including scalability, load balancing, and fault tolerance. 1.6. Supported operating modes in JBoss EAP JBoss EAP has powerful management capabilities for deployed applications. These capabilities differ depending on which operating mode is used to start JBoss EAP. JBoss EAP offers the following operating modes: Standalone server to manage instances individually Managed domain for managing groups of instances from a single control point
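As a small sketch of the management CLI workflow recommended above, an administrator can read and create datasource configurations without editing XML files. The datasource name, JNDI name, and connection URL below are illustrative assumptions; ExampleDS is the default H2 datasource shipped with a standalone server:

$ EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=datasources/data-source=ExampleDS:read-resource
[standalone@localhost:9990 /] data-source add --name=MyDS --jndi-name=java:jboss/datasources/MyDS --driver-name=h2 --connection-url=jdbc:h2:mem:mydb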
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/introduction_to_jboss_eap/assembly-introduction-jboss-eap_default
13.4. Save Copy of Model
13.4. Save Copy of Model The Save As... action performs a function similar to the Refactor > Rename action, except that the renamed model is a structural copy of the original model. Note Each model object maintains its own unique ID, so copying a model will result in an exact structural copy of your original model but with regenerated unique object IDs. Be aware that locating and copying your models via your local file system may result in runtime errors within Teiid Designer . Each model is expected to be unique, and duplicate models are not permitted. To create a duplicate model using Save As...: Open the model you wish to copy in a Model Editor by double-clicking the model in the Model Explorer view, or right-click it and click the Open action. Select the editor tab for the model you opened. Figure 13.4. Select Editor Tab Click File > Save As... to open the Save Model As dialog. Figure 13.5. Save Model As Dialog Enter a unique model name in the new model name text field and click OK . If dependent models are detected, the Save Model As - Import References dialog is presented to give you the opportunity to choose whether the imports of each dependent model should reference the new model. Figure 13.6. Save Model As Dialog
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/save_copy_of_model
5.281. rgmanager
5.281. rgmanager 5.281.1. RHBA-2012:0897 - rgmanager bug fix and enhancement update Updated rgmanager packages that fix several bugs and add an enhancement are now available for Red Hat Enterprise Linux 6. The rgmanager packages contain the Red Hat Resource Group Manager, which provides the ability to create and manage high-availability server applications in the event of system downtime. Bug Fixes BZ# 635152 Previously, rgmanager incorrectly called the rg_wait_threads() function during cluster reconfiguration. This could lead to an internal deadlock in rgmanager, which caused the cluster services to become unresponsive. This unnecessary call has been removed from the code and deadlocks no longer occur during cluster reconfiguration. BZ# 727326 When enabling a service using the clusvcadm command with the "-F" option, rgmanager did not update the service owner information before responding to clusvcadm. Consequently, clusvcadm could print incorrect information about which cluster node the service was running on. This update modifies rgmanager to update the owner information prior to responding to clusvcadm, and the command now provides the correct information. BZ# 743218 Under certain circumstances, a "stopped" event could be processed after a service and its dependent services had already been restarted. This forced the dependent services to restart erroneously. This update allows rgmanager to ignore the "stopped" events if dependent services have already been started, and the services are no longer restarted unnecessarily. BZ# 744824 Resource Group Manager did not handle certain inter-service dependencies correctly. Therefore, if a service was dependent on another service that was running on the same cluster node, the dependent service became unresponsive during the service failover and remained in the recovering state. With this update, rgmanager has been modified to check a service state during failover and stop the service if it is dependent on the service that is failing over. Resource Group Manager then tries to start this dependent service on other nodes as expected. BZ# 745226 The "-F" option of the clusvcadm command allows rgmanager to start a service according to failover domain rules. This option was not previously described in the command's manual pages. With this update, the "-F" option has been properly documented in the clusvcadm(8) manual page. BZ# 796272 Previously, if a newly added service failed to start on the first cluster node, rgmanager could try to relocate the service to another cluster node before the cluster configuration was updated on that node. Consequently, the service was set to the "recovering" state and had to be manually re-enabled in order to start. This update modifies rgmanager to retry the relocation process until after the cluster configuration has been updated on the node. The service can now be relocated as expected. BZ# 803474 Due to an invalid pointer dereference, rgmanager could terminate unexpectedly with a segmentation fault when central processing mode was enabled on a cluster node. With this update, the pointer dereference has been corrected, and rgmanager no longer crashes when central processing mode is enabled. BZ# 807165 Previously, in central processing mode, rgmanager failed to restart services that depended on a service that failed and was recovered. With this update, during the recovery of a failed service, any services that depend on it are restarted. 
Enhancement BZ# 799505 This update introduces a feature which enables rgmanager to utilize Corosync's Closed Process Group (CPG) API for inter-node locking. This feature is automatically enabled when Corosync's Redundant Ring Protocol (RRP) feature is enabled. Corosync's RRP feature is considered fully supported. However, when used with the rest of the High-Availability Add-Ons, it is considered a Technology Preview. Users are advised to upgrade to these updated rgmanager packages, which fix these bugs and add this enhancement.
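For example, the "-F" option documented in BZ#745226 is combined with the enable operation; the service name here is illustrative:

~]# clusvcadm -e example_service -F

rgmanager then starts the service on a node chosen according to the failover domain rules.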
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rgmanager
1.3. DM-Multipath Components
1.3. DM-Multipath Components Table 1.1, "DM-Multipath Components", describes the components of DM-Multipath. Table 1.1. DM-Multipath Components Component Description dm-multipath kernel module Reroutes I/O and supports failover for paths and path groups. multipath command Lists and configures multipath devices. Normally started up with /etc/rc.sysinit , it can also be started up by a udev program whenever a block device is added, or it can be run by the initramfs file system. multipathd daemon Monitors paths; as paths fail and come back, it may initiate path group switches. Provides for interactive changes to multipath devices. The daemon must be restarted for any changes to the /etc/multipath.conf file to take effect. kpartx command Creates device mapper devices for the partitions on a device. It is necessary to use this command for DOS-based partitions with DM-MP. The kpartx command is provided in its own package, but the device-mapper-multipath package depends on it.
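As a brief illustration of the commands in Table 1.1 (the device name is an example only), an administrator can list the configured multipath devices and then create partition mappings for one of them as root:

~]# multipath -l
~]# kpartx -a /dev/mapper/mpatha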
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/MPIO_Components
Chapter 2. Alertmanager [monitoring.coreos.com/v1]
Chapter 2. Alertmanager [monitoring.coreos.com/v1] Description The Alertmanager custom resource definition (CRD) defines a desired [Alertmanager]( https://prometheus.io/docs/alerting ) setup to run in a Kubernetes cluster. It allows you to specify many options, such as the number of replicas and persistent storage. For each Alertmanager resource, the Operator deploys a StatefulSet in the same namespace. When there are two or more configured replicas, the Operator runs the Alertmanager instances in high-availability mode. The resource defines via label and namespace selectors which AlertmanagerConfig objects should be associated with the deployed Alertmanager instances. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 2.1.1. .spec Description Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalPeers array (string) AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster. affinity object If specified, the pod's scheduling constraints. alertmanagerConfigMatcherStrategy object AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects process incoming alerts. alertmanagerConfigNamespaceSelector object Namespaces to be selected for AlertmanagerConfig discovery. If nil, only the object's own namespace is checked. alertmanagerConfigSelector object AlertmanagerConfigs to be selected and merged to configure Alertmanager. alertmanagerConfiguration object alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This is an experimental feature; it may change in any upcoming release in a breaking way. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in the pod. If the service account has automountServiceAccountToken: true , set the field to false to opt out of automounting API credentials. baseImage string Base image that is used to deploy pods, without tag. Deprecated: use 'image' instead. 
clusterAdvertiseAddress string ClusterAdvertiseAddress is the explicit address to advertise in cluster. Needs to be provided for non RFC1918 [1] (public) addresses. [1] RFC1918: https://tools.ietf.org/html/rfc1918 clusterGossipInterval string Interval between gossip attempts. clusterLabel string Defines the identifier that uniquely identifies the Alertmanager cluster. You should only set it when the Alertmanager cluster includes Alertmanager instances which are external to this Alertmanager resource. In practice, the addresses of the external instances are provided via the .spec.additionalPeers field. clusterPeerTimeout string Timeout for cluster peering. clusterPushpullInterval string Interval between pushpull attempts. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/alertmanager/configmaps/<configmap-name> in the 'alertmanager' container. configSecret string ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains the configuration for this Alertmanager instance. If empty, it defaults to alertmanager-<alertmanager-name> . The Alertmanager configuration should be available under the alertmanager.yaml key. Additional keys from the original secret are copied to the generated secret and mounted into the /etc/alertmanager/config directory in the alertmanager container. If either the secret or the alertmanager.yaml key is missing, the operator provisions a minimal Alertmanager configuration with one empty receiver (effectively dropping alert notifications). containers array Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. dnsConfig object Defines the DNS configuration for the pods. dnsPolicy string Defines the DNS policy for the pods. enableFeatures array (string) Enable access to Alertmanager feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. It requires Alertmanager >= 0.27.0. externalUrl string The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. forceEnableClusterMode boolean ForceEnableClusterMode ensures Alertmanager does not deactivate the cluster mode when running with a single replica. Use case is e.g. spanning an Alertmanager cluster across Kubernetes clusters with a single replica in each. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. 
image string Image, if specified, has precedence over baseImage, tag, and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Alertmanager is being configured. imagePullPolicy string Image pull policy for the 'alertmanager', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify the operator-generated init containers if they share the same name and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. listenLocal boolean ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. logFormat string Log format for Alertmanager to be configured with. logLevel string Log level for Alertmanager to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. paused boolean If set to true, all actions on the underlying managed objects are not going to be performed, except for delete actions. podMetadata object PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". portName string Port name used for the pods and governing service. Defaults to web . priorityClassName string Priority class assigned to the Pods. replicas integer Size is the expected size of the alertmanager cluster. The controller will eventually make the size of the running cluster equal to the expected size. 
resources object Define resources requests and limits for single Pods. retention string Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression [0-9]+(ms|s|m|h) (milliseconds seconds minutes hours). routePrefix string The route prefix Alertmanager registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . secrets array (string) Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/alertmanager/secrets/<secret-name> in the 'alertmanager' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. sha string SHA of Alertmanager container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. storage object Storage is the definition of how storage will be used by the Alertmanager instances. tag string Tag of Alertmanager container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. version string Version the cluster should be on. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the web command line flags when starting Alertmanager. 2.1.2. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.3. 
.spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.8. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. 
matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
2.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored.
The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". 
An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.38. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. 
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". 
An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
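As a sketch of how the pod affinity and anti-affinity terms above fit together, the fragment below spreads Alertmanager pods across nodes and, preferentially, across zones. The label app.kubernetes.io/name: alertmanager is an illustrative assumption; any labels actually carried by the pods would work:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  affinity:
    podAntiAffinity:
      # Hard rule: never co-locate two matching pods on the same node.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: alertmanager  # hypothetical pod label
        topologyKey: kubernetes.io/hostname
      # Soft rule: additionally prefer spreading across zones.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: alertmanager
          topologyKey: topology.kubernetes.io/zone
```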
2.1.54. .spec.alertmanagerConfigMatcherStrategy Description AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects process incoming alerts. Type object Property Type Description type string AlertmanagerConfigMatcherStrategyType defines the strategy used by AlertmanagerConfig objects to match alerts in the routes and inhibition rules. The default value is OnNamespace. 2.1.55. .spec.alertmanagerConfigNamespaceSelector Description Namespaces to be selected for AlertmanagerConfig discovery. If nil, only the Alertmanager object's own namespace is checked. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.56. .spec.alertmanagerConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array
2.1.57. .spec.alertmanagerConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.58. .spec.alertmanagerConfigSelector Description AlertmanagerConfigs to be selected and merged to configure Alertmanager. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.59. .spec.alertmanagerConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.60. .spec.alertmanagerConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.61. .spec.alertmanagerConfiguration Description alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This is an experimental feature; it may change in any upcoming release in a breaking way. Type object Property Type Description global object Defines the global parameters of the Alertmanager configuration. name string The name of the AlertmanagerConfig resource which is used to generate the Alertmanager configuration. It must be defined in the same namespace as the Alertmanager object. The operator will not enforce a namespace label for routes and inhibition rules. templates array Custom notification templates. templates[] object SecretOrConfigMap allows specifying data as a Secret or ConfigMap. Fields are mutually exclusive.
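A minimal sketch of the alertmanagerConfiguration field follows, assuming an AlertmanagerConfig resource named global-config and a ConfigMap named notification-templates already exist in the same namespace (all three names, and the monitoring namespace, are hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
  namespace: monitoring            # hypothetical namespace
spec:
  alertmanagerConfiguration:
    # Takes precedence over configSecret; must live in the same namespace.
    name: global-config            # hypothetical AlertmanagerConfig resource
    templates:
    # Each entry is a SecretOrConfigMap; configMap and secret are mutually exclusive.
    - configMap:
        name: notification-templates
        key: custom.tmpl
```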
2.1.62. .spec.alertmanagerConfiguration.global Description Defines the global parameters of the Alertmanager configuration. Type object Property Type Description httpConfig object HTTP client configuration. opsGenieApiKey object The default OpsGenie API Key. opsGenieApiUrl object The default OpsGenie API URL. pagerdutyUrl string The default Pagerduty URL. resolveTimeout string ResolveTimeout is the default value used by Alertmanager if the alert does not include EndsAt; after this time passes, it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. slackApiUrl object The default Slack API URL. smtp object Configures global SMTP parameters. 2.1.63. .spec.alertmanagerConfiguration.global.httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, and domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 2.1.64. .spec.alertmanagerConfiguration.global.httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 2.1.65. .spec.alertmanagerConfiguration.global.httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined
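For example, the global HTTP client could authenticate with a bearer token taken from a Secret. This is a sketch only; the Secret name and key are assumptions:

```yaml
spec:
  alertmanagerConfiguration:
    name: global-config            # hypothetical, as in the earlier example
    global:
      httpConfig:
        followRedirects: true
        # Mutually exclusive with basicAuth; Alertmanager v0.22+ only.
        authorization:
          type: Bearer             # "Basic" is not a supported value here
          credentials:
            name: receiver-auth    # hypothetical Secret in the same namespace
            key: token
```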
2.1.66. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 2.1.67. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.68. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.69. .spec.alertmanagerConfiguration.global.httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.70. .spec.alertmanagerConfiguration.global.httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, and domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from.
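Continuing the sketch, an oauth2 block could be used instead of the authorization block shown earlier; its three required fields are clientId, clientSecret, and tokenUrl. The Secret names, keys, URL, and scope below are hypothetical:

```yaml
spec:
  alertmanagerConfiguration:
    global:
      httpConfig:
        oauth2:
          clientId:
            secret:                          # a configMap reference would also work
              name: oauth-client             # hypothetical Secret
              key: client-id
          clientSecret:
            name: oauth-client
            key: client-secret
          tokenUrl: https://auth.example.com/oauth2/token
          scopes:
          - alerts.write                     # hypothetical scope
```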
2.1.71. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.72. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.73. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.74. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.75. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 2.1.76. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.proxyConnectHeader{} Description Type array 2.1.77. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.78. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 2.1.79. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.80. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.81. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.82. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.83. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.84. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.85. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.86. .spec.alertmanagerConfiguration.global.httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 2.1.87. .spec.alertmanagerConfiguration.global.httpConfig.proxyConnectHeader{} Description Type array 2.1.88. .spec.alertmanagerConfiguration.global.httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.89. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets.
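The client TLS settings follow the same Secret/ConfigMap reference pattern. A sketch, with hypothetical ConfigMap and Secret names and an assumed version string format:

```yaml
spec:
  alertmanagerConfiguration:
    global:
      httpConfig:
        tlsConfig:
          ca:
            configMap:
              name: receiver-ca       # hypothetical ConfigMap
              key: ca.crt
          cert:
            secret:
              name: receiver-client   # hypothetical Secret
              key: tls.crt
          keySecret:
            name: receiver-client
            key: tls.key
          minVersion: TLS12           # assumed value format; requires Prometheus >= v2.35.0
```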
2.1.90. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.91. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.92. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.93. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.94. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.95. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.96. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.97. .spec.alertmanagerConfiguration.global.opsGenieApiKey Description The default OpsGenie API Key. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.98. .spec.alertmanagerConfiguration.global.opsGenieApiUrl Description The default OpsGenie API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.99. .spec.alertmanagerConfiguration.global.slackApiUrl Description The default Slack API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.100. .spec.alertmanagerConfiguration.global.smtp Description Configures global SMTP parameters. Type object Property Type Description authIdentity string SMTP Auth using PLAIN authPassword object SMTP Auth using LOGIN and PLAIN. authSecret object SMTP Auth using CRAM-MD5. authUsername string SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. from string The default SMTP From header field. hello string The default hostname to identify to the SMTP server. requireTLS boolean The default SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. smartHost object The default SMTP smarthost used for sending emails. 2.1.101. .spec.alertmanagerConfiguration.global.smtp.authPassword Description SMTP Auth using LOGIN and PLAIN. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.102. .spec.alertmanagerConfiguration.global.smtp.authSecret Description SMTP Auth using CRAM-MD5. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.103. .spec.alertmanagerConfiguration.global.smtp.smartHost Description The default SMTP smarthost used for sending emails. Type object Required host port Property Type Description host string Defines the host's address, it can be a DNS name or a literal IP address. 
port string Defines the host's port, it can be a literal port number or a port name. 2.1.104. .spec.alertmanagerConfiguration.templates Description Custom notification templates. Type array 2.1.105. .spec.alertmanagerConfiguration.templates[] Description SecretOrConfigMap allows specifying data as a Secret or ConfigMap. Fields are mutually exclusive. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.106. .spec.alertmanagerConfiguration.templates[].configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.107. .spec.alertmanagerConfiguration.templates[].secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined
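The following sketch, for illustration only, shows how the global SMTP defaults and custom notification templates described above might be set on an Alertmanager resource. The resource name, namespace, AlertmanagerConfig name, Secret, and ConfigMap names (smtp-auth, notification-templates, and so on) are placeholder values, not defaults:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example                          # placeholder name
  namespace: monitoring                  # placeholder namespace
spec:
  alertmanagerConfiguration:
    name: global-config                  # placeholder AlertmanagerConfig used as the global configuration
    global:
      smtp:
        from: alertmanager@example.com   # default SMTP From header field
        smartHost:
          host: smtp.example.com         # placeholder smarthost; DNS name or literal IP
          port: "587"                    # literal port number or port name, as a string
        authUsername: alertmanager
        authPassword:                    # SecretKeySelector into a placeholder Secret
          name: smtp-auth
          key: password
        requireTLS: true
    templates:                           # SecretOrConfigMap entries; fields are mutually exclusive
    - configMap:
        name: notification-templates
        key: custom.tmpl
    - secret:
        name: notification-templates-private
        key: private.tmpl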
"(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. 
Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
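Because modifications to the operator-generated alertmanager and config-reloader containers are applied with a strategic merge patch, a containers entry that reuses one of those names patches the generated container, while a new name adds a sidecar. The following sketch is illustrative only (the proxy image, port, and memory limit are placeholders), and, as noted above, overriding the generated containers is unsupported and may break without notice:

spec:
  containers:
  - name: config-reloader              # same name as the generated container, so this entry is merged into it
    resources:
      limits:
        memory: 32Mi                   # placeholder limit
  - name: oauth-proxy                  # a new name injects an additional sidecar container
    image: registry.example.com/oauth-proxy:latest   # placeholder image
    ports:
    - name: proxy
      containerPort: 8443              # placeholder port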
2.1.110. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.111. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.112. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.113. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.114. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.115. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.116. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.117. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.118. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.119. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 2.1.120. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 2.1.121. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. 
Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.122. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. 2.1.123. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.124. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.125. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.126. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.127. .spec.containers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.128. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility.
There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.129. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. 2.1.130. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.131. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.132. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.133. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.134. .spec.containers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.135. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
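As an illustration of the preStop handler described above, the following hedged sketch delays shutdown of the operator-generated alertmanager container so in-flight requests can drain. The container name matches the generated container (an unsupported override, per .spec.containers), and the 15-second delay is a placeholder:

spec:
  containers:
  - name: alertmanager                 # merged into the generated container via strategic merge patch
    lifecycle:
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - sleep 15                   # placeholder drain delay, in seconds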
2.1.136. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.137. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell.
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.138. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.139. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.140. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.141. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.142. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
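For example, a liveness probe on the generated alertmanager container might poll Alertmanager's /-/healthy endpoint. This sketch assumes the container's HTTP port is named web; treat it as illustrative only, since patching the generated containers is unsupported:

spec:
  containers:
  - name: alertmanager
    livenessProbe:
      httpGet:
        path: /-/healthy               # Alertmanager health endpoint
        port: web                      # assumed port name on the generated container
      periodSeconds: 10                # probe every 10 seconds (the default)
      failureThreshold: 3              # restart after 3 consecutive failures (the default)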
2.1.143. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.144. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.145. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.146. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.147. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.148. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.149. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.150. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.151. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.152. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.153. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.154. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.155. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.156. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. 
Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 2.1.157. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. 
If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.158. .spec.containers[].securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 2.1.159. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.160. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.161. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. 
type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.162. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.163. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal.
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.164. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.165. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.166. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.167. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.168. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.169. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
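As an illustration of the startupProbe fields above, the following hedged sketch gives a slow-starting Alertmanager up to 150 seconds (30 failures at a 5-second period) to pass its readiness endpoint before liveness checking begins. The port name web is an assumption, and patching the generated container is unsupported:

spec:
  containers:
  - name: alertmanager
    startupProbe:
      httpGet:
        path: /-/ready                 # Alertmanager readiness endpoint
        port: web                      # assumed port name
      periodSeconds: 5                 # probe every 5 seconds
      failureThreshold: 30             # allow up to 30 failures, i.e. 150 seconds of startup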
2.1.170. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.171. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.172. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.173. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.174. .spec.dnsConfig Description Defines the DNS configuration for the pods. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. 2.1.175. .spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 2.1.176. .spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Required name Property Type Description name string Name is required and must be unique. value string Value is optional.
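For example, a minimal .spec.dnsConfig sketch that appends a resolver, a search domain, and an ndots option to what DNSPolicy generates; the address and domain below are placeholders:

spec:
  dnsConfig:
    nameservers:
    - 192.0.2.53                       # placeholder name server (documentation address range)
    searches:
    - monitoring.svc.cluster.local     # placeholder search domain
    options:
    - name: ndots
      value: "2"                       # resolver option; overrides the base DNSPolicy value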
2.1.176. .spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Required name Property Type Description name string Name is required and must be unique. value string Value is optional. 2.1.177. .spec.hostAliases Description Pods' hostAliases configuration Type array 2.1.178. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 2.1.179. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 2.1.180. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.181. .spec.initContainers Description InitContainers allows adding initContainers to the pod definition. These can be used, for example, to fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify the operator-generated init containers if they share the same name, and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array
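For illustration, a minimal sketch of overriding the generated init container described above; as noted, doing so is unsupported by the maintainers and may break without notice.

```yaml
spec:
  initContainers:
  - name: init-config-reloader   # matches the Operator-generated init container,
                                 # so this entry is applied as a strategic merge patch
    resources:                   # example modification; values are assumptions
      limits:
        memory: 64Mi
```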
2.1.182. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated.
Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
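For illustration, a minimal sketch of the restartPolicy: Always behavior described above, which keeps an init container running as a sidecar on clusters that support native sidecar containers; the name, image, and command are assumptions.

```yaml
spec:
  initContainers:
  - name: cache-warmer                            # illustrative name
    image: registry.example.com/cache-warmer:1.0  # hypothetical image
    command: ["/bin/cache-warmer", "--loop"]      # hypothetical entrypoint
    restartPolicy: Always                         # restarted on exit until the regular containers terminate
```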
2.1.183. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.184. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.185. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.186. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.187. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version.
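For illustration, a minimal sketch combining the fieldRef and secretKeyRef sources described above; the container, Secret, and key names are assumptions.

```yaml
spec:
  initContainers:
  - name: init-example                    # illustrative
    image: registry.example.com/init:1.0  # hypothetical image
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace   # one of the supported pod fields
    - name: API_TOKEN
      valueFrom:
        secretKeyRef:
          name: example-secret            # hypothetical Secret in the same namespace
          key: token
          optional: false                 # the Secret and key must exist
```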
2.1.188. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.189. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.190. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.191. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.192. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 2.1.193. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined
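For illustration, a minimal sketch of the envFrom sources described above; the ConfigMap and Secret names are assumptions.

```yaml
spec:
  initContainers:
  - name: init-example                    # illustrative
    image: registry.example.com/init:1.0  # hypothetical image
    envFrom:
    - prefix: CONF_                       # prepended to each key; resulting keys must remain C_IDENTIFIERs
      configMapRef:
        name: example-config              # hypothetical ConfigMap
    - secretRef:
        name: example-secret              # hypothetical Secret
        optional: true                    # tolerate a missing Secret
```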
2.1.194. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.195. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime if a TCP handler is specified. 2.1.196. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.197. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.198. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.199. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.200. .spec.initContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.201. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated.
TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime if a TCP handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.202. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime if a TCP handler is specified. 2.1.203. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.204. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.205. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.206. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value
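For illustration, a minimal sketch of a preStop httpGet handler with a custom header, using the fields described above; the endpoint, port, and header are assumptions.

```yaml
spec:
  initContainers:
  - name: init-example            # illustrative
    lifecycle:
      preStop:
        httpGet:
          path: /drain            # assumed shutdown endpoint
          port: 8080
          scheme: HTTP
          httpHeaders:
          - name: X-Drain-Reason  # canonicalized upon output
            value: pod-shutdown
```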
2.1.207. .spec.initContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.208. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime if a TCP handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.209. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.210. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell.
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.211. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.212. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.213. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.214. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.215. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.216. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.217. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
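For illustration, a minimal sketch of the ports fields described above; the port name and number are assumptions.

```yaml
spec:
  initContainers:
  - name: init-example     # illustrative
    ports:
    - name: metrics        # an IANA_SVC_NAME, unique within the pod
      containerPort: 9090
      protocol: TCP        # the default when omitted
```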
2.1.218. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.219. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.220. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.221. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.222. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.223. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.224. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.225. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.226. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.227. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.228. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.229. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. 
Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 2.1.230. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container.
If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.231. .spec.initContainers[].securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 2.1.232. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.233. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.234. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. 
type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.235. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.236. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal.
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.237. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.238. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.239. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.240. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.241. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.242. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.243. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.244. 
.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.245. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.246. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
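For illustration, a minimal sketch of the volumeMounts fields described above; the volume name and paths are assumptions.

```yaml
spec:
  initContainers:
  - name: init-example       # illustrative
    volumeMounts:
    - name: extra-config     # must match the name of a volume defined for the pod
      mountPath: /etc/extra  # must not contain ':'
      subPath: rendered      # mount only this path within the volume; "" selects the root
      readOnly: true
```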
2.1.247. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.248. .spec.resources Description Defines resource requests and limits for single Pods. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.249. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.250. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 2.1.251. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description appArmorProfile object appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod.
2.1.251. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description appArmorProfile object appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership (and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID and fsGroup (if specified). If the SupplementalGroupsPolicy feature is enabled, the supplementalGroupsPolicy field determines whether these are in addition to or instead of any group memberships defined in the container image. If unspecified, no additional groups are added, though group memberships defined in the container image may still be used, depending on the supplementalGroupsPolicy field. Note that this field cannot be set when spec.os.name is windows. supplementalGroupsPolicy string Defines how supplemental groups of the first container processes are calculated. Valid values are "Merge" and "Strict". If not specified, "Merge" is used. (Alpha) Using the field requires the SupplementalGroupsPolicy feature gate to be enabled and the container runtime must implement support for this feature. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
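A minimal sketch of a pod-level security context combining several of the fields above; the numeric IDs are arbitrary example values.

spec:
  securityContext:
    runAsNonRoot: true      # the kubelet rejects images that would run as UID 0
    runAsUser: 1000         # arbitrary non-root UID, for illustration only
    runAsGroup: 3000
    fsGroup: 2000           # supported volume types are chowned to this GID
    seccompProfile:
      type: RuntimeDefault  # use the container runtime's default seccomp profile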
2.1.252. .spec.securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 2.1.253. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.254. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.255. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 2.1.256. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 2.1.257. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.258. .spec.storage Description Storage is the definition of how storage will be used by the Alertmanager instances. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes.
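For illustration, the simplest storage configuration is a memory-backed emptyDir; data is lost whenever the pod restarts, so this suits only testing. The size limit shown is an assumption.

spec:
  storage:
    emptyDir:
      medium: Memory  # back the volume with tmpfs rather than node disk
      sizeLimit: 1Gi  # usage on Memory medium is capped by the lower of this and the pod's memory limits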
2.1.259. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.260. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.261. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.262. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.263. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have.
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set.
If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.264. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.265. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.266. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.267. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.268. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.269. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
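For illustration, a selector of this shape restricts binding to manually created PersistentVolumes that carry matching labels; both label keys below are hypothetical.

selector:
  matchLabels:
    app: alertmanager        # shorthand for an In expression with a single value
  matchExpressions:
    - key: environment       # hypothetical label key
      operator: In
      values:
        - production
    - key: decommissioned    # values must be empty for Exists/DoesNotExist
      operator: DoesNotExist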
2.1.270. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 2.1.271. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.272. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired.
This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim.
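Putting these fields together, a sketch of a claim template that provisions one PVC per Alertmanager pod; the storage class name and requested size are assumptions and must match what the cluster provides.

spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: standard  # hypothetical StorageClass; must exist in the cluster
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi           # placeholder size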
2.1.273. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.274. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
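A sketch of pre-populating the claim from an existing VolumeSnapshot through dataSourceRef; the snapshot name is hypothetical, and as noted above the AnyVolumeDataSource feature gate must be enabled for this to take effect.

spec:
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        dataSourceRef:
          apiGroup: snapshot.storage.k8s.io
          kind: VolumeSnapshot
          name: alertmanager-backup  # hypothetical pre-existing snapshot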
2.1.275. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.276. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.277. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.278. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.279. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used.
ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. 
When unset, there is no VolumeAttributesClass applied to this PersistentVolumeClaim. This is a beta field and requires enabling VolumeAttributesClass feature (off by default). modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is a beta field and requires enabling VolumeAttributesClass feature (off by default). phase string phase represents the current phase of PersistentVolumeClaim. 2.1.280. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. Type array 2.1.281. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "Resizing" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType defines the condition of PV claim. Valid values are: - "Resizing", "FileSystemResizePending" If RecoverVolumeExpansionFailure feature gate is enabled, then following additional values can be expected: - "ControllerResizeError", "NodeResizeError" If VolumeAttributesClass feature gate is enabled, then following additional values can be expected: - "ModifyVolumeError", "ModifyingVolume" 2.1.282. .spec.storage.volumeClaimTemplate.status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is a beta field and requires enabling VolumeAttributesClass feature (off by default). Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 2.1.283. .spec.tolerations Description If specified, the pod's tolerations. Type array 2.1.284. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
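As a sketch, tolerations of this shape would let the pods schedule onto matching tainted nodes; the dedicated/monitoring taint is hypothetical, while node.kubernetes.io/unreachable is a well-known node condition taint.

spec:
  tolerations:
    - key: dedicated           # hypothetical taint key applied by a cluster admin
      operator: Equal
      value: monitoring
      effect: NoSchedule
    - key: node.kubernetes.io/unreachable
      operator: Exists         # Exists matches any value for the key
      effect: NoExecute
      tolerationSeconds: 300   # tolerate the taint for 5 minutes, then evict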
2.1.285. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 2.1.286. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field.
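For illustration, a constraint that spreads the pods evenly across zones; the app.kubernetes.io/name: alertmanager label matches the reserved label the operator sets on Alertmanager pods (see .spec.podMetadata above).

spec:
  topologySpreadConstraints:
    - maxSkew: 1                 # at most one pod of difference between zones
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: alertmanager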
2.1.287. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.288. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.289. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.290. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. Type array 2.1.291. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None).
If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
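Tying .spec.volumes and .spec.volumeMounts together, a sketch that mounts an assumed ConfigMap of notification templates into the alertmanager container; the ConfigMap name and mount path are hypothetical.

spec:
  volumes:
    - name: extra-templates
      configMap:
        name: alertmanager-templates  # hypothetical ConfigMap in the same namespace
  volumeMounts:
    - name: extra-templates           # must match the volume name above
      mountPath: /etc/alertmanager/custom-templates
      readOnly: true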
This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath image object image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine. The volume is resolved at pod startup depending on which PullPolicy value is provided: - Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) with non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace.
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 2.1.294. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 2.1.295. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 2.1.296. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. 
Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 2.1.297. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 2.1.298. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.299. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 2.1.300. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
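To make the cephfs fields above concrete, the following minimal sketch (all names and addresses hypothetical) adds a Ceph FS share as an extra volume on the generated StatefulSet and mounts it into the alertmanager container; the Secret ceph-secret holding the Ceph key is assumed to exist in the same namespace:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  volumes:
    # cephfs: monitors is the only required field; secretRef points at the
    # authentication secret and takes the place of a keyring file (secretFile)
    - name: ceph-share
      cephfs:
        monitors:
          - 192.168.1.10:6789
        path: /exports/alertmanager   # mounted root instead of the full Ceph tree
        user: admin                   # rados user; admin is the documented default
        secretRef:
          name: ceph-secret           # hypothetical Secret with the Ceph key
        readOnly: true
  volumeMounts:
    # appended to the mounts that are generated for the alertmanager container
    - name: ceph-share
      mountPath: /etc/alertmanager/ceph-data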
2.1.301. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.302. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.303. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.304. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.
nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 2.1.305. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.306. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.307. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 2.1.308. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
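As a hedged sketch of the downwardAPI fields just described (the container name and file paths are illustrative), the fragment below, which belongs under the same Alertmanager spec, projects the pod's labels and the alertmanager container's CPU limit into files under a single volume:

spec:
  volumes:
    - name: pod-info
      downwardAPI:
        defaultMode: 0644            # octal in YAML; JSON would need the decimal 420
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # only annotations, labels, name, namespace, uid
          - path: cpu-limit
            resourceFieldRef:
              containerName: alertmanager  # containerName is required for volumes
              resource: limits.cpu
              divisor: 1m                  # output format of the exposed value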
2.1.309. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.310. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.311. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.312. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).
An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.313. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.314. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.315. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim.
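The following minimal sketch (StorageClass name hypothetical) shows how the volumeClaimTemplate fields above fit together for a generic ephemeral volume; the resulting PVC would be named <pod name>-scratch and deleted together with the pod:

spec:
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:                      # only labels and annotations are allowed here
              app: alertmanager
          spec:                          # copied unchanged into the generated PVC
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast-ssd   # hypothetical StorageClass
            resources:
              requests:
                storage: 1Gi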
2.1.316. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.317. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
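To illustrate dataSourceRef (snapshot name hypothetical, and assuming a CSI driver plus the relevant feature gates support it), the sketch below pre-populates an ephemeral claim from an existing VolumeSnapshot in the same namespace:

spec:
  volumes:
    - name: restored
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            dataSourceRef:
              apiGroup: snapshot.storage.k8s.io
              kind: VolumeSnapshot
              name: alertmanager-snap   # hypothetical snapshot; namespace is omitted,
                                        # so dataSource is populated with the same value
            resources:
              requests:
                storage: 1Gi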
2.1.318. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.319. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.320. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.321. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.322. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids is Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.
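A minimal Fibre Channel sketch follows (WWN and LUN values hypothetical); note that the fields above require either wwids or the targetWWNs/lun combination, not both:

spec:
  volumes:
    - name: fc-disk
      fc:
        targetWWNs:
          - "500a0981891b8dc5"   # hypothetical FC target worldwide name
        lun: 2
        fsType: ext4             # the implied default when unspecified
        readOnly: true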
2.1.323. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 2.1.324. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.325. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is the name of the dataset stored as metadata; the name on the dataset for Flocker should be considered deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is a unique identifier of a Flocker dataset 2.1.326. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
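For the legacy in-tree GCE volume above, a hedged sketch (disk name hypothetical; the persistent disk must already exist and be reachable from the node) looks like this:

spec:
  volumes:
    - name: gce-data
      gcePersistentDisk:
        pdName: alertmanager-disk  # hypothetical pre-created GCE persistent disk
        fsType: ext4
        partition: 1               # mount the first partition rather than the whole disk
        readOnly: true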
2.1.327. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 2.1.328. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 2.1.329. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume. Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 2.1.330. .spec.volumes[].image Description image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine. The volume is resolved at pod startup depending on which PullPolicy value is provided: Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails. Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field.
The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) with non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type. Type object Property Type Description pullPolicy string Policy for pulling OCI objects. Possible values are: Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails. Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. reference string Required: Image or artifact reference to be used. Behaves in the same way as pod.spec.containers[*].image. Pull secrets will be assembled in the same way as for the container image by looking up node credentials, SA image pull secrets, and pod spec image pull secrets. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. 2.1.331. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether to support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether to support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 2.1.332. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent.
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.333. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 2.1.334. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly will force the ReadOnly setting in VolumeMounts. Default false. 2.1.335. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 2.1.336. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 2.1.337. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections. Each entry in this list handles one source. sources[] object Projection that may be projected along with other supported volume types. Exactly one of these fields must be set.
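Since each entry in sources must set exactly one field, a projected volume typically lists several single-source entries. The sketch below (ConfigMap, Secret, and audience names hypothetical) combines a ConfigMap, a Secret, and a bound service account token under one mount:

spec:
  volumes:
    - name: combined
      projected:
        defaultMode: 0440
        sources:
          - configMap:
              name: am-extra-config     # hypothetical ConfigMap
              items:
                - key: extra.yaml
                  path: extra.yaml
          - secret:
              name: am-credentials      # hypothetical Secret; all keys projected
          - serviceAccountToken:
              audience: vault           # hypothetical intended audience
              expirationSeconds: 3600   # must be at least 600; kubelet rotates proactively
              path: token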
2.1.338. .spec.volumes[].projected.sources Description sources is the list of volume projections. Each entry in this list handles one source. Type array 2.1.339. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types. Exactly one of these fields must be set. Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 2.1.340. .spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 2.1.341. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.342. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.343. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.344. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.345. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.346. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. 
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.347. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.348. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 2.1.349. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.350. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.351. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.352. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' 
path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 2.1.353. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.354. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.355. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 2.1.356. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.
registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 2.1.357. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 2.1.358. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.359. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.
sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 2.1.360. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.361. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 2.1.362. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.363. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.364. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 2.1.365. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 2.1.366. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 2.1.367. .spec.web Description Defines the web command line flags when starting Alertmanager. Type object Property Type Description getConcurrency integer Maximum number of GET requests processed concurrently. This corresponds to the Alertmanager's --web.get-concurrency flag. httpConfig object Defines HTTP parameters for web server. timeout integer Timeout for HTTP requests. This corresponds to the Alertmanager's --web.timeout flag. tlsConfig object Defines the TLS parameters for HTTPS. 2.1.368. .spec.web.httpConfig Description Defines HTTP parameters for web server. 
Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 2.1.369. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 2.1.370. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Property Type Description cert object Contains the TLS certificate for the server. certFile string Path to the TLS certificate file in the Prometheus container for the server. Mutually exclusive with cert . cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType clientCAFile string Path to the CA certificate file for client certificate authentication to the server. Mutually exclusive with client_ca . client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keyFile string Path to the TLS key file in the Prometheus container for the server. Mutually exclusive with keySecret . keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 2.1.371. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.372. 
.spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.373. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.374. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.375. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 2.1.376. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.377. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 2.1.378. .status Description Most recent observed status of the Alertmanager cluster. Read-only. 
More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Alertmanager cluster. conditions array The current state of the Alertmanager object. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Alertmanager object (their labels match the selector). selector string The selector used to match the pods targeted by this Alertmanager object. unavailableReplicas integer Total number of unavailable pods targeted by this Alertmanager object. updatedReplicas integer Total number of non-terminated pods targeted by this Alertmanager object that have the desired version spec. 2.1.379. .status.conditions Description The current state of the Alertmanager object. Type array 2.1.380. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 2.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/alertmanagers GET : list objects of kind Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers DELETE : delete collection of Alertmanager GET : list objects of kind Alertmanager POST : create an Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} DELETE : delete an Alertmanager GET : read the specified Alertmanager PATCH : partially update the specified Alertmanager PUT : replace the specified Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/scale GET : read scale of the specified Alertmanager PATCH : partially update scale of the specified Alertmanager PUT : replace scale of the specified Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status GET : read status of the specified Alertmanager PATCH : partially update status of the specified Alertmanager PUT : replace status of the specified Alertmanager 2.2.1. /apis/monitoring.coreos.com/v1/alertmanagers HTTP method GET Description list objects of kind Alertmanager Table 2.1. HTTP responses HTTP code Response body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty 2.2.2.
/apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers HTTP method DELETE Description delete collection of Alertmanager Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Alertmanager Table 2.3. HTTP responses HTTP code Response body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty HTTP method POST Description create an Alertmanager Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Alertmanager schema Table 2.6. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 202 - Accepted Alertmanager schema 401 - Unauthorized Empty 2.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method DELETE Description delete an Alertmanager Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Alertmanager Table 2.10. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Alertmanager Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Alertmanager Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body Alertmanager schema Table 2.15. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty 2.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/scale Table 2.16. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method GET Description read scale of the specified Alertmanager Table 2.17. HTTP responses HTTP code Response body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified Alertmanager Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Response body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified Alertmanager Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body Scale schema Table 2.22. HTTP responses HTTP code Response body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 2.2.5. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status Table 2.23. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method GET Description read status of the specified Alertmanager Table 2.24. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Alertmanager Table 2.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.26. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Alertmanager Table 2.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.28. Body parameters Parameter Type Description body Alertmanager schema Table 2.29. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty
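The projected volume sources documented in sections 2.1.344 through 2.1.355 compose into a single volume. The following manifest is a minimal sketch of how items, mode, path, and serviceAccountToken fit together; the volume, ConfigMap, and Secret names and the token audience are hypothetical placeholders, not shipped defaults.

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  replicas: 3
  volumes:
  - name: extra-assets                        # hypothetical volume name
    projected:
      defaultMode: 0440                       # octal accepted in YAML; JSON clients must send 288
      sources:
      - configMap:
          name: alertmanager-assets           # hypothetical ConfigMap
          optional: true
          items:
          - key: ca.crt
            path: tls/ca.crt                  # relative path; '..' is rejected
      - secret:
          name: alertmanager-assets-tls       # hypothetical Secret
          items:
          - key: client.key
            path: tls/client.key
            mode: 0400                        # overrides defaultMode for this file only
      - serviceAccountToken:
          audience: https://vault.example.com # hypothetical audience
          expirationSeconds: 3600             # must be at least 600 seconds (10 minutes)
          path: token

As the expirationSeconds description notes, the kubelet begins rotating the projected token once it passes 80 percent of its time to live.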
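Similarly, the .spec.web fields from sections 2.1.367 through 2.1.377 can be combined as in the sketch below, which assumes a pre-existing Secret named alertmanager-web-tls holding tls.crt and tls.key; both the Secret name and its keys are assumptions made for illustration.

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  web:
    httpConfig:
      http2: true                    # only honored when tlsConfig is configured
      headers:
        xContentTypeOptions: nosniff # the one accepted value per section 2.1.369
        xFrameOptions: deny
    tlsConfig:
      cert:
        secret:
          name: alertmanager-web-tls # hypothetical Secret with the serving certificate
          key: tls.crt
      keySecret:
        name: alertmanager-web-tls
        key: tls.key
      minVersion: TLS12              # the documented default, shown explicitly

Note that cert and certFile are mutually exclusive, as are keySecret and keyFile; this sketch uses the Secret-based variants.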
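For orientation, a populated .status block (sections 2.1.378 through 2.1.380) might look like the following fragment; every value is illustrative, and the condition type and reason shown are examples rather than a guaranteed set.

status:
  paused: false
  replicas: 3
  updatedReplicas: 3
  availableReplicas: 3
  unavailableReplicas: 0
  conditions:
  - type: Available                          # example condition type
    status: "True"
    lastTransitionTime: "2024-05-01T12:00:00Z"
    observedGeneration: 2                    # matches .metadata.generation when up to date
    reason: ""
    message: ""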
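The /scale subresource in section 2.2.4 reads and writes the standard autoscaling/v1 Scale object, so replacing the scale with PUT takes a small body such as the following sketch (the object name and namespace are placeholders):

apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: example
  namespace: example-namespace
spec:
  replicas: 5    # the only mutable field on the scale subresource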
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/alertmanager-monitoring-coreos-com-v1
Chapter 7. RangeAllocation [security.openshift.io/v1]
Chapter 7. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required range data 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data string data is a byte array representing the serialized state of a range allocation. It is a bitmap with each bit set to one to represent a range is taken. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta range string range is a string representing a unique label for a range of uids, "1000000000-2000000000/10000". 7.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/rangeallocations DELETE : delete collection of RangeAllocation GET : list or watch objects of kind RangeAllocation POST : create a RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations GET : watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. /apis/security.openshift.io/v1/rangeallocations/{name} DELETE : delete a RangeAllocation GET : read the specified RangeAllocation PATCH : partially update the specified RangeAllocation PUT : replace the specified RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations/{name} GET : watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/security.openshift.io/v1/rangeallocations Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of RangeAllocation Table 7.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.3. Body parameters Parameter Type Description body DeleteOptions schema Table 7.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RangeAllocation Table 7.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.6. HTTP responses HTTP code Reponse body 200 - OK RangeAllocationList schema 401 - Unauthorized Empty HTTP method POST Description create a RangeAllocation Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.8. Body parameters Parameter Type Description body RangeAllocation schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 202 - Accepted RangeAllocation schema 401 - Unauthorized Empty 7.2.2. /apis/security.openshift.io/v1/watch/rangeallocations Table 7.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/security.openshift.io/v1/rangeallocations/{name} Table 7.12. Global path parameters Parameter Type Description name string name of the RangeAllocation Table 7.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a RangeAllocation Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.15. Body parameters Parameter Type Description body DeleteOptions schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RangeAllocation Table 7.17. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RangeAllocation Table 7.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.19. Body parameters Parameter Type Description body Patch schema Table 7.20. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RangeAllocation Table 7.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.22. Body parameters Parameter Type Description body RangeAllocation schema Table 7.23. HTTP responses HTTP code Response body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty 7.2.4. /apis/security.openshift.io/v1/watch/rangeallocations/{name} Table 7.24. Global path parameters Parameter Type Description name string name of the RangeAllocation Table 7.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
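For example, you can exercise the list parameters described above directly with oc get --raw . The following is a minimal sketch; the limit value and the <continue_token> placeholder are illustrative:
$ oc get --raw "/apis/security.openshift.io/v1/rangeallocations?limit=1"
$ oc get --raw "/apis/security.openshift.io/v1/rangeallocations?limit=1&continue=<continue_token>"
$ oc get --raw "/apis/security.openshift.io/v1/rangeallocations?watch=true"
The first command returns at most one item together with a metadata.continue token, the second command passes that token back to retrieve the next chunk, and the third command starts a watch by using the watch parameter with a list operation, as recommended in place of the deprecated watch endpoints.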
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/rangeallocation-security-openshift-io-v1
Chapter 16. structured
Chapter 16. structured Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise this field will be empty or absent, and the message field will contain the original log message. The structured field can have any subfields that are included in the log message; there are no restrictions defined here. Data type group Example value map[message:starting fluentd worker pid=21631 ppid=21618 worker=0 pid:21631 ppid:21618 worker:0]
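The example value above is a Go-style map rendering. Expressed as JSON, the same parsed record might look like the following; this is an illustrative sketch only, and the exact subfields depend entirely on what the application logged:
{ "structured": { "message": "starting fluentd worker pid=21631 ppid=21618 worker=0", "pid": 21631, "ppid": 21618, "worker": 0 } }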
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/structured
20.12. Events Configuration
20.12. Events Configuration Using the following sections of domain XML it is possible to override the default actions taken on various events. <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <on_lockfailure>poweroff</on_lockfailure> Figure 20.18. Events configuration The following collections of elements allow the actions to be specified when a guest virtual machine OS triggers a life cycle operation. A common use case is to force a reboot to be treated as a poweroff when doing the initial OS installation. This allows the VM to be re-configured for the first post-install bootup. The components of this section of the domain XML are as follows: Table 20.10. Event configuration elements State Description <on_poweroff> Specifies the action that is to be executed when the guest virtual machine requests a poweroff. Four arguments are possible: destroy - this action terminates the domain completely and releases all resources restart - this action terminates the domain completely and restarts it with the same configuration preserve - this action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - this action terminates the domain completely and then restarts it with a new name <on_reboot> Specifies the action that is to be executed when the guest virtual machine requests a reboot. Four arguments are possible: destroy - this action terminates the domain completely and releases all resources restart - this action terminates the domain completely and restarts it with the same configuration preserve - this action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - this action terminates the domain completely and then restarts it with a new name <on_crash> Specifies the action that is to be executed when the guest virtual machine crashes. In addition to the four arguments below, it supports these actions: coredump-destroy - the crashed domain's core is dumped, the domain is terminated completely, and all resources are released. coredump-restart - the crashed domain's core is dumped, and the domain is restarted with the same configuration settings Four arguments are possible: destroy - this action terminates the domain completely and releases all resources restart - this action terminates the domain completely and restarts it with the same configuration preserve - this action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - this action terminates the domain completely and then restarts it with a new name <on_lockfailure> Specifies what action should be taken when a lock manager loses resource locks. The following actions are recognized by libvirt, although not all of them need to be supported by individual lock managers. When no action is specified, each lock manager will take its default action. The following arguments are possible: poweroff - forcefully powers off the domain restart - restarts the domain to reacquire its locks. pause - pauses the domain so that it can be manually resumed when lock issues are solved. ignore - keeps the domain running as if nothing happened.
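To review the actions currently configured for a guest, you can dump its domain XML and filter for the event elements. The following is a minimal sketch; the guest name guest1 is illustrative:
$ virsh dumpxml guest1 | grep -E '<on_(poweroff|reboot|crash|lockfailure)>'
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<on_lockfailure>poweroff</on_lockfailure>
To change one of these actions, run virsh edit guest1 and modify the corresponding element in the domain XML.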
[ "<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <on_lockfailure>poweroff</on_lockfailure>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-libvirt-dom-xml-event-config
Chapter 1. Security for HTTP-Compatible Bindings
Chapter 1. Security for HTTP-Compatible Bindings Abstract This chapter describes the security features supported by the Apache CXF HTTP transport. These security features are available to any Apache CXF binding that can be layered on top of the HTTP transport. Overview This section describes how to configure the HTTP transport to use SSL/TLS security, a combination usually referred to as HTTPS. In Apache CXF, HTTPS security is configured by specifying settings in XML configuration files. Warning If you enable SSL/TLS security, you must ensure that you explicitly disable the SSLv3 protocol, in order to safeguard against the Poodle vulnerability (CVE-2014-3566) . For more details, see Disabling SSLv3 in JBoss Fuse 6.x and JBoss A-MQ 6.x . The following topics are discussed in this chapter: the section called "Generating X.509 certificates" the section called "Enabling HTTPS" the section called "HTTPS client with no certificate" the section called "HTTPS client with certificate" the section called "HTTPS server configuration" Generating X.509 certificates A basic prerequisite for using SSL/TLS security is to have a collection of X.509 certificates available to identify your server applications and, optionally, to identify your client applications. You can generate X.509 certificates in one of the following ways: Use a commercial third-party tool to generate and manage your X.509 certificates. Use the free openssl utility (which can be downloaded from http://www.openssl.org ) and the Java keystore utility to generate certificates (see Section 2.5.3, "Use the CA to Create Signed Certificates in a Java Keystore" ). Note The HTTPS protocol mandates a URL integrity check , which requires a certificate's identity to match the hostname on which the server is deployed. See Section 2.4, "Special Requirements on HTTPS Certificates" for details. Certificate format In the Java runtime, you must deploy X.509 certificate chains and trusted CA certificates in the form of Java keystores. See Chapter 3, Configuring HTTPS for details. Enabling HTTPS A prerequisite for enabling HTTPS on a WSDL endpoint is that the endpoint address must be specified as an HTTPS URL. There are two different locations where the endpoint address is set and both must be modified to use an HTTPS URL: HTTPS specified in the WSDL contract: you must specify the endpoint address in the WSDL contract to be a URL with the https: prefix, as shown in Example 1.1, "Specifying HTTPS in the WSDL" . Example 1.1. Specifying HTTPS in the WSDL Where the location attribute of the soap:address element is configured to use an HTTPS URL. For bindings other than SOAP, you edit the URL appearing in the location attribute of the http:address element. HTTPS specified in the server code: you must ensure that the URL published in the server code by calling Endpoint.publish() is defined with an https: prefix, as shown in Example 1.2, "Specifying HTTPS in the Server Code" . Example 1.2. Specifying HTTPS in the Server Code HTTPS client with no certificate For example, consider the configuration for a secure HTTPS client with no certificate, as shown in Example 1.3, "Sample HTTPS Client with No Certificate" . Example 1.3. Sample HTTPS Client with No Certificate The preceding client configuration is described as follows: The TLS security settings are defined on a specific WSDL port. In this example, the WSDL port being configured has the QName, {http://apache.org/hello_world_soap_http}SoapPort . 
The http:tlsClientParameters element contains all of the client's TLS configuration details. The sec:trustManagers element is used to specify a list of trusted CA certificates (the client uses this list to decide whether or not to trust certificates received from the server side). The file attribute of the sec:keyStore element specifies a Java keystore file, truststore.jks , containing one or more trusted CA certificates. The password attribute specifies the password required to access the keystore, truststore.jks . See Section 3.2.2, "Specifying Trusted CA Certificates for HTTPS" . Note Instead of the file attribute, you can specify the location of the keystore using either the resource attribute (where the keystore file is provided on the classpath) or the url attribute. In particular, the resource attribute must be used with applications that are deployed into an OSGi container. You must be extremely careful not to load the truststore from an untrustworthy source. The sec:cipherSuitesFilter element can be used to narrow the choice of cipher suites that the client is willing to use for a TLS connection. See Chapter 4, Configuring HTTPS Cipher Suites for details. HTTPS client with certificate Consider a secure HTTPS client that is configured to have its own certificate. Example 1.4, "Sample HTTPS Client with Certificate" shows how to configure such a sample client. Example 1.4. Sample HTTPS Client with Certificate The preceding client configuration is described as follows: The sec:keyManagers element is used to attach an X.509 certificate and a private key to the client. The password specified by the keyPassword attribute is used to decrypt the certificate's private key. The sec:keyStore element is used to specify an X.509 certificate and a private key that are stored in a Java keystore. This sample declares that the keystore is in Java Keystore format (JKS). The file attribute specifies the location of the keystore file, wibble.jks , that contains the client's X.509 certificate chain and private key in a key entry . The password attribute specifies the keystore password, which is required to access the contents of the keystore. It is expected that the keystore file contains just one key entry, so it is not necessary to specify a key alias to identify the entry. If you are deploying a keystore file with multiple key entries, however, it is possible to specify the key in this case by adding the sec:certAlias element as a child of the http:tlsClientParameters element, as follows: For details of how to create a keystore file, see Section 2.5.3, "Use the CA to Create Signed Certificates in a Java Keystore" . Note Instead of the file attribute, you can specify the location of the keystore using either the resource attribute (where the keystore file is provided on the classpath) or the url attribute. In particular, the resource attribute must be used with applications that are deployed into an OSGi container. You must be extremely careful not to load the truststore from an untrustworthy source. HTTPS server configuration Consider a secure HTTPS server that requires clients to present an X.509 certificate. Example 1.5, "Sample HTTPS Server Configuration" shows how to configure such a server. Example 1.5. Sample HTTPS Server Configuration The preceding server configuration is described as follows: The bus attribute references the relevant CXF Bus instance. By default, a CXF Bus instance with the ID, cxf , is automatically created by the Apache CXF runtime. 
On the server side, TLS is not configured for each WSDL port. Instead of configuring each WSDL port, the TLS security settings are applied to a specific TCP port , which is 9001 in this example. All of the WSDL ports that share this TCP port are therefore configured with the same TLS security settings. The httpj:tlsServerParameters element contains all of the server's TLS configuration details. Important You must set secureSocketProtocol to TLSv1 on the server side, in order to protect against the Poodle vulnerability (CVE-2014-3566). The sec:keyManagers element is used to attach an X.509 certificate and a private key to the server. The password specified by the keyPassword attribute is used to decrypt the certificate's private key. The sec:keyStore element is used to specify an X.509 certificate and a private key that are stored in a Java keystore. This sample declares that the keystore is in Java Keystore format (JKS). The file attribute specifies the location of the keystore file, cherry.jks , that contains the server's X.509 certificate chain and private key in a key entry . The password attribute specifies the keystore password, which is needed to access the contents of the keystore. It is expected that the keystore file contains just one key entry, so it is not necessary to specify a key alias to identify the entry. If you are deploying a keystore file with multiple key entries, however, it is possible to specify the key in this case by adding the sec:certAlias element as a child of the httpj:tlsServerParameters element, as follows: Note Instead of the file attribute, you can specify the location of the keystore using either the resource attribute or the url attribute. You must be extremely careful not to load the truststore from an untrustworthy source. For details of how to create such a keystore file, see Section 2.5.3, "Use the CA to Create Signed Certificates in a Java Keystore" . The sec:trustManagers element is used to specify a list of trusted CA certificates (the server uses this list to decide whether or not to trust certificates presented by clients). The file attribute of the sec:keyStore element specifies a Java keystore file, truststore.jks , containing one or more trusted CA certificates. The password attribute specifies the password required to access the keystore, truststore.jks . See Section 3.2.2, "Specifying Trusted CA Certificates for HTTPS" . Note Instead of the file attribute, you can specify the location of the keystore using either the resource attribute or the url attribute. The sec:cipherSuitesFilter element can be used to narrow the choice of cipher suites that the server is willing to use for a TLS connection. See Chapter 4, Configuring HTTPS Cipher Suites for details. The sec:clientAuthentication element determines the server's disposition towards the presentation of client certificates. The element has the following attributes: want attribute: if true (the default), the server requests the client to present an X.509 certificate during the TLS handshake; if false , the server does not request the client to present an X.509 certificate. required attribute: if true , the server raises an exception if a client fails to present an X.509 certificate during the TLS handshake; if false (the default), the server does not raise an exception if the client fails to present an X.509 certificate.
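For a quick test deployment, you can generate the keystores referenced in the preceding examples with the JDK keytool utility. The following is a minimal sketch that creates a self-signed server keystore, cherry.jks , and imports its certificate into truststore.jks ; for production, use CA-signed certificates as described in Section 2.5.3, "Use the CA to Create Signed Certificates in a Java Keystore" . The common name, validity period, and passwords are illustrative, and the CN must match the host name in the HTTPS URL because of the URL integrity check:
$ keytool -genkeypair -alias cherry -keyalg RSA -keysize 2048 -dname "CN=localhost" -validity 365 -keystore cherry.jks -storepass password -keypass password
$ keytool -exportcert -alias cherry -keystore cherry.jks -storepass password -file cherry.cer
$ keytool -importcert -alias cherry -file cherry.cer -keystore truststore.jks -storepass password -noprompt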
[ "<wsdl:definitions name=\"HelloWorld\" targetNamespace=\"http://apache.org/hello_world_soap_http\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" ... > <wsdl:service name=\"SOAPService\"> <wsdl:port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <soap:address location=\" https://localhost:9001/SoapContext/SoapPort \"/> </wsdl:port> </wsdl:service> </wsdl:definitions>", "// Java package demo.hw_https.server; import javax.xml.ws.Endpoint; public class Server { protected Server() throws Exception { Object implementor = new GreeterImpl(); String address = \" https://localhost:9001/SoapContext/SoapPort \"; Endpoint.publish(address, implementor); } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:sec=\"http://cxf.apache.org/configuration/security\" xmlns:http=\"http://cxf.apache.org/transports/http/configuration\" xmlns:jaxws=\"http://java.sun.com/xml/ns/jaxws\" xsi:schemaLocation=\"...\"> <http:conduit name=\"{http://apache.org/hello_world_soap_http}SoapPort.http-conduit\"> <http:tlsClientParameters> <sec:trustManagers> <sec:keyStore type=\"JKS\" password=\"password\" file=\"certs/truststore.jks\"/> </sec:trustManagers> <sec:cipherSuitesFilter> <sec:include>.*_WITH_3DES_.*</sec:include> <sec:include>.*_WITH_DES_.*</sec:include> <sec:exclude>.*_WITH_NULL_.*</sec:exclude> <sec:exclude>.*_DH_anon_.*</sec:exclude> </sec:cipherSuitesFilter> </http:tlsClientParameters> </http:conduit> </beans>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:sec=\"http://cxf.apache.org/configuration/security\" xmlns:http=\"http://cxf.apache.org/transports/http/configuration\" xmlns:jaxws=\"http://java.sun.com/xml/ns/jaxws\" xsi:schemaLocation=\"...\"> <http:conduit name=\"{http://apache.org/hello_world_soap_http}SoapPort.http-conduit\"> <http:tlsClientParameters> <sec:trustManagers> <sec:keyStore type=\"JKS\" password=\"password\" file=\"certs/truststore.jks\"/> </sec:trustManagers> <sec:keyManagers keyPassword=\"password\"> <sec:keyStore type=\"JKS\" password=\"password\" file=\"certs/wibble.jks\"/> </sec:keyManagers> <sec:cipherSuitesFilter> <sec:include>.*_WITH_3DES_.*</sec:include> <sec:include>.*_WITH_DES_.*</sec:include> <sec:exclude>.*_WITH_NULL_.*</sec:exclude> <sec:exclude>.*_DH_anon_.*</sec:exclude> </sec:cipherSuitesFilter> </http:tlsClientParameters> </http:conduit> <bean id=\"cxf\" class=\"org.apache.cxf.bus.CXFBusImpl\"/> </beans>", "<http:tlsClientParameters> <sec:certAlias> CertAlias </sec:certAlias> </http:tlsClientParameters>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:sec=\"http://cxf.apache.org/configuration/security\" xmlns:http=\"http://cxf.apache.org/transports/http/configuration\" xmlns:httpj=\"http://cxf.apache.org/transports/http-jetty/configuration\" xmlns:jaxws=\"http://java.sun.com/xml/ns/jaxws\" xsi:schemaLocation=\"...\"> <httpj:engine-factory bus=\"cxf\"> <httpj:engine port=\"9001\"> <httpj:tlsServerParameters secureSocketProtocol=\"TLSv1\"> <sec:keyManagers keyPassword=\"password\"> <sec:keyStore type=\"JKS\" password=\"password\" file=\"certs/cherry.jks\"/> </sec:keyManagers> <sec:trustManagers> <sec:keyStore type=\"JKS\" password=\"password\" 
file=\"certs/truststore.jks\"/> </sec:trustManagers> <sec:cipherSuitesFilter> <sec:include>.*_WITH_3DES_.*</sec:include> <sec:include>.*_WITH_DES_.*</sec:include> <sec:exclude>.*_WITH_NULL_.*</sec:exclude> <sec:exclude>.*_DH_anon_.*</sec:exclude> </sec:cipherSuitesFilter> <sec:clientAuthentication want=\"true\" required=\"true\"/> </httpj:tlsServerParameters> </httpj:engine> </httpj:engine-factory> </beans>", "<http:tlsClientParameters> <sec:certAlias> CertAlias </sec:certAlias> </http:tlsClientParameters>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/HTTPCompatible
8.56. gnome-screensaver
8.56. gnome-screensaver 8.56.1. RHBA-2013:1706 - gnome-screensaver bug fix update Updated gnome-screensaver packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The gnome-screensaver packages contain the GNOME project's official screen saver program. The screen saver is designed for improved integration with the GNOME desktop, including themeability, language support, and Human Interface Guidelines (HIG) compliance. It also provides screen-locking and fast user-switching from a locked screen. Bug Fixes BZ# 905935 Previously, when using the virt-manager, virt-viewer, and spice-xpi applications, users were unable to enter the gnome-screensaver password after the screen saver had started. This occurred only when the virtual machine system used the Compiz compositing window manager. After users had released the mouse cursor, then pressed a key to enter the password, the dialog window did not accept any input. This happened due to incorrect assignment of window focus to applications that did not drop their keyboard grab. With this update, window focus is now properly assigned to the correct place, and attempts to enter the gnome-screensaver password no longer fail in the described scenario. BZ# 947671 Prior to this update, the gnome-screensaver utility worked incorrectly when using an X server that does not support the fade-out function. Consequently, gnome-screensaver terminated unexpectedly when trying to fade out the monitor. This bug has been fixed and gnome-screensaver now detects a potential fade-out failure and recovers instead of crashing. Users of gnome-screensaver are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/gnome-screensaver
Using Containerized Identity Management Services
Using Containerized Identity Management Services Red Hat Enterprise Linux 7 Overview and installation of containerized Identity Management services Florian Delehaye Red Hat Customer Content Services [email protected] Marc Muehlfeld Red Hat Customer Content Services Filip Hanzelka Red Hat Customer Content Services Lucie Manaskova Red Hat Customer Content Services Aneta Steflova Petrova Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/using_containerized_identity_management_services/index
RPM installation
RPM installation Red Hat Ansible Automation Platform 2.5 Install the RPM version of Ansible Automation Platform Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/index
Chapter 23. Hardware networks
Chapter 23. Hardware networks 23.1. About Single Root I/O Virtualization (SR-IOV) hardware networks The Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can share a single device with multiple pods. SR-IOV can segment a compliant network device, recognized on the host node as a physical function (PF), into multiple virtual functions (VFs). The VF is used like any other network device. The SR-IOV network device driver for the device determines how the VF is exposed in the container: netdevice driver: A regular kernel network device in the netns of the container vfio-pci driver: A character device mounted in the container You can use SR-IOV network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You can configure multi-network policies for SR-IOV networks. Support for this is Technology Preview, and SR-IOV additional networks are supported only with kernel NICs; they are not supported for Data Plane Development Kit (DPDK) applications. Note Creating multi-network policies on SR-IOV networks might not deliver the same performance to applications compared to SR-IOV networks without a multi-network policy configured. Important Multi-network policies for SR-IOV networks are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable SR-IOV on a node by using the following command: $ oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" 23.1.1. Components that manage SR-IOV network devices The SR-IOV Network Operator creates and manages the components of the SR-IOV stack. It performs the following functions: Orchestrates discovery and management of SR-IOV network devices Generates NetworkAttachmentDefinition custom resources for the SR-IOV Container Network Interface (CNI) Creates and updates the configuration of the SR-IOV network device plugin Creates node specific SriovNetworkNodeState custom resources Updates the spec.interfaces field in each SriovNetworkNodeState custom resource The Operator provisions the following components: SR-IOV network configuration daemon A daemon set that is deployed on worker nodes when the SR-IOV Network Operator starts. The daemon is responsible for discovering and initializing SR-IOV network devices in the cluster. SR-IOV Network Operator webhook A dynamic admission controller webhook that validates the Operator custom resource and sets appropriate default values for unset fields. SR-IOV Network resources injector A dynamic admission controller webhook that provides functionality for patching Kubernetes pod specifications with requests and limits for custom network resources such as SR-IOV VFs. The SR-IOV network resources injector adds the resource field to only the first container in a pod automatically. SR-IOV network device plugin A device plugin that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. 
Device plugins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plugins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources. SR-IOV CNI plugin A CNI plugin that attaches VF interfaces allocated from the SR-IOV network device plugin directly into a pod. SR-IOV InfiniBand CNI plugin A CNI plugin that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV network device plugin directly into a pod. Note The SR-IOV Network resources injector and SR-IOV Network Operator webhook are enabled by default and can be disabled by editing the default SriovOperatorConfig CR. Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. 23.1.1.1. Supported platforms The SR-IOV Network Operator is supported on the following platforms: Bare metal Red Hat OpenStack Platform (RHOSP) 23.1.1.2. Supported devices OpenShift Container Platform supports the following network interface controllers: Table 23.1. Supported network interface controllers Manufacturer Model Vendor ID Device ID Broadcom BCM57414 14e4 16d7 Broadcom BCM57508 14e4 1750 Broadcom BCM57504 14e4 1751 Intel X710 8086 1572 Intel X710 Backplane 8086 1581 Intel X710 Base T 8086 15ff Intel XL710 8086 1583 Intel XXV710 8086 158b Intel E810-CQDA2 8086 1592 Intel E810-2CQDA2 8086 1592 Intel E810-XXVDA2 8086 159b Intel E810-XXVDA4 8086 1593 Intel E810-XXVDA4T 8086 1593 Mellanox MT27700 Family [ConnectX‐4] 15b3 1013 Mellanox MT27710 Family [ConnectX‐4 Lx] 15b3 1015 Mellanox MT27800 Family [ConnectX‐5] 15b3 1017 Mellanox MT28880 Family [ConnectX‐5 Ex] 15b3 1019 Mellanox MT28908 Family [ConnectX‐6] 15b3 101b Mellanox MT2892 Family [ConnectX‐6 Dx] 15b3 101d Mellanox MT2894 Family [ConnectX‐6 Lx] 15b3 101f Mellanox MT42822 BlueField‐2 in ConnectX‐6 NIC mode 15b3 a2d6 Pensando [1] DSC-25 dual-port 25G distributed services card for ionic driver 0x1dd8 0x1002 Pensando [1] DSC-100 dual-port 100G distributed services card for ionic driver 0x1dd8 0x1003 Silicom STS Family 8086 1591 [1] OpenShift SR-IOV is supported, but you must set a static Virtual Function (VF) media access control (MAC) address using the SR-IOV CNI config file when using SR-IOV. Note For the most up-to-date list of supported cards and compatible OpenShift Container Platform versions available, see Openshift Single Root I/O Virtualization (SR-IOV) and PTP hardware networks Support Matrix . 23.1.1.3. Automated discovery of SR-IOV network devices The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device. The CR is assigned the same name as the worker node. The status.interfaces list provides information about the network devices on a node. Important Do not modify a SriovNetworkNodeState object. The Operator creates and manages these resources automatically. 
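You can inspect these objects directly. The following is a read-only sketch; <node_name> is a placeholder for one of your worker nodes:
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator
$ oc get sriovnetworknodestates <node_name> -n openshift-sriov-network-operator -o jsonpath='{.status.syncStatus}'
The first command lists the discovered node states, and the second command prints the synchronization status, such as Succeeded , for a single node. 23.1.1.3.1. 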
Example SriovNetworkNodeState object The following YAML is an example of a SriovNetworkNodeState object created by the SR-IOV Network Operator: An SriovNetworkNodeState object apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: "39824" status: interfaces: 2 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: "0000:18:00.0" totalvfs: 8 vendor: 15b3 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: "0000:18:00.1" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: "8086" syncStatus: Succeeded 1 The value of the name field is the same as the name of the worker node. 2 The interfaces stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. 23.1.1.4. Example use of a virtual function in a pod You can run a remote direct memory access (RDMA) or a Data Plane Development Kit (DPDK) application in a pod with SR-IOV VF attached. This example shows a pod using a virtual function (VF) in RDMA mode: Pod spec that uses RDMA mode apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] command: ["sleep", "infinity"] The following example shows a pod with a VF in DPDK mode: Pod spec that uses DPDK mode apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" requests: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 23.1.1.5. DPDK library for use with container applications An optional library , app-netutil , provides several API methods for gathering network information about a pod from within a container running within that pod. This library can assist with integrating SR-IOV virtual functions (VFs) in Data Plane Development Kit (DPDK) mode into the container. The library provides both a Golang API and a C API. Currently there are three API methods implemented: GetCPUInfo() This function determines which CPUs are available to the container and returns the list. GetHugepages() This function determines the amount of huge page memory requested in the Pod spec for each container and returns the values. GetInterfaces() This function determines the set of interfaces in the container and returns the list. The return value includes the interface type and type-specific data for each interface. The repository for the library includes a sample Dockerfile to build a container image, dpdk-app-centos . 
The container image can run one of the following DPDK sample applications, depending on an environment variable in the pod specification: l2fwd , l3fwd , or testpmd . The container image provides an example of integrating the app-netutil library into the container image itself. The library can also integrate into an init container. The init container can collect the required data and pass the data to an existing DPDK workload. 23.1.1.6. Huge pages resource injection for Downward API When a pod specification includes a resource request or limit for huge pages, the Network Resources Injector automatically adds Downward API fields to the pod specification to provide the huge pages information to the container. The Network Resources Injector adds a volume that is named podnetinfo and is mounted at /etc/podnetinfo for each container in the pod. The volume uses the Downward API and includes a file for huge pages requests and limits. The file naming convention is as follows: /etc/podnetinfo/hugepages_1G_request_<container-name> /etc/podnetinfo/hugepages_1G_limit_<container-name> /etc/podnetinfo/hugepages_2M_request_<container-name> /etc/podnetinfo/hugepages_2M_limit_<container-name> The paths specified in the list are compatible with the app-netutil library. By default, the library is configured to search for resource information in the /etc/podnetinfo directory. If you choose to specify the Downward API paths manually, the app-netutil library searches for the following paths in addition to the paths in the list. /etc/podnetinfo/hugepages_request /etc/podnetinfo/hugepages_limit /etc/podnetinfo/hugepages_1G_request /etc/podnetinfo/hugepages_1G_limit /etc/podnetinfo/hugepages_2M_request /etc/podnetinfo/hugepages_2M_limit As with the paths that the Network Resources Injector can create, the paths in the preceding list can optionally end with a _<container-name> suffix. 23.1.2. Additional resources Configuring multi-network policy 23.1.3. Next steps Installing the SR-IOV Network Operator Optional: Configuring the SR-IOV Network Operator Configuring an SR-IOV network device If you use OpenShift Virtualization: Connecting a virtual machine to an SR-IOV network Configuring an SR-IOV network attachment Adding a pod to an SR-IOV additional network 23.2. Installing the SR-IOV Network Operator You can install the Single Root I/O Virtualization (SR-IOV) Network Operator on your cluster to manage SR-IOV network devices and network attachments. 23.2.1. Installing the SR-IOV Network Operator As a cluster administrator, you can install the Single Root I/O Virtualization (SR-IOV) Network Operator by using the OpenShift Container Platform CLI or the web console. 23.2.1.1. CLI: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. 
Procedure To create the openshift-sriov-network-operator namespace, enter the following command: $ cat << EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF To create an OperatorGroup CR, enter the following command: $ cat << EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF To create a Subscription CR for the SR-IOV Network Operator, enter the following command: $ cat << EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF To verify that the Operator is installed, enter the following command: $ oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase sriov-network-operator.4.14.0-202310121402 Succeeded 23.2.1.2. Web console: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the web console. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. Procedure Install the SR-IOV Network Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select SR-IOV Network Operator from the list of available Operators, and then click Install . On the Install Operator page, under Installed Namespace , select Operator recommended Namespace . Click Install . Verify that the SR-IOV Network Operator is installed successfully: Navigate to the Operators Installed Operators page. Ensure that SR-IOV Network Operator is listed in the openshift-sriov-network-operator project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-sriov-network-operator project. Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command: $ oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management Note For single-node OpenShift clusters, the annotation workload.openshift.io/allowed=management is required for the namespace. 23.2.2. Next steps Optional: Configuring the SR-IOV Network Operator 23.3. Configuring the SR-IOV Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. 23.3.1. Configuring the SR-IOV Network Operator Important Modifying the SR-IOV Network Operator configuration is not normally necessary. The default configuration is recommended for most use cases. 
Complete the steps to modify the relevant configuration only if the default behavior of the Operator is not compatible with your use case. The SR-IOV Network Operator adds the SriovOperatorConfig.sriovnetwork.openshift.io CustomResourceDefinition resource. The Operator automatically creates a SriovOperatorConfig custom resource (CR) named default in the openshift-sriov-network-operator namespace. Note The default CR contains the SR-IOV Network Operator configuration for your cluster. To change the Operator configuration, you must modify this CR. 23.3.1.1. SR-IOV Network Operator config custom resource The fields for the sriovoperatorconfig custom resource are described in the following table: Table 23.2. SR-IOV Network Operator config custom resource Field Type Description metadata.name string Specifies the name of the SR-IOV Network Operator instance. The default value is default . Do not set a different value. metadata.namespace string Specifies the namespace of the SR-IOV Network Operator instance. The default value is openshift-sriov-network-operator . Do not set a different value. spec.configDaemonNodeSelector string Specifies the node selection to control scheduling the SR-IOV Network Config Daemon on selected nodes. By default, this field is not set and the Operator deploys the SR-IOV Network Config daemon set on worker nodes. spec.disableDrain boolean Specifies whether to disable the node draining process or enable the node draining process when you apply a new policy to configure the NIC on a node. Setting this field to true facilitates software development and installing OpenShift Container Platform on a single node. By default, this field is not set. For single-node clusters, set this field to true after installing the Operator. This field must remain set to true . spec.enableInjector boolean Specifies whether to enable or disable the Network Resources Injector daemon set. By default, this field is set to true . spec.enableOperatorWebhook boolean Specifies whether to enable or disable the Operator Admission Controller webhook daemon set. By default, this field is set to true . spec.logLevel integer Specifies the log verbosity level of the Operator. Set to 0 to show only the basic logs. Set to 2 to show all the available logs. By default, this field is set to 2 . 23.3.1.2. About the Network Resources Injector The Network Resources Injector is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Mutation of resource requests and limits in a pod specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. Mutation of a pod specification with a Downward API volume to expose pod annotations, labels, and huge pages requests and limits. Containers that run in the pod can access the exposed information as files under the /etc/podnetinfo path. By default, the Network Resources Injector is enabled by the SR-IOV Network Operator and runs as a daemon set on all control plane nodes. The following is an example of Network Resources Injector pods running in a cluster with three control plane nodes: $ oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m 
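After the injector mutates a pod, you can confirm the result by inspecting the pod specification and the mounted volume. The following is an illustrative sketch; <pod_name> is a placeholder, and the exact resource names depend on your SR-IOV network attachment definitions:
$ oc get pod <pod_name> -o jsonpath='{.spec.containers[0].resources}'
$ oc exec <pod_name> -- ls /etc/podnetinfo
The first command shows the injected requests and limits, and the second command lists the Downward API files exposed under /etc/podnetinfo . 23.3.1.3. 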
About the SR-IOV Network Operator admission controller webhook The SR-IOV Network Operator Admission Controller webhook is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Validation of the SriovNetworkNodePolicy CR when it is created or updated. Mutation of the SriovNetworkNodePolicy CR by setting the default value for the priority and deviceType fields when the CR is created or updated. By default the SR-IOV Network Operator Admission Controller webhook is enabled by the Operator and runs as a daemon set on all control plane nodes. Note Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. For information about configuring unsupported devices, see Configuring the SR-IOV Network Operator to use an unsupported NIC . The following is an example of the Operator Admission Controller webhook pods running in a cluster with three control plane nodes: $ oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m 23.3.1.4. About custom node selectors The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. 23.3.1.5. Disabling or enabling the Network Resources Injector To disable or enable the Network Resources Injector, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure Set the enableInjector field. Replace <value> with false to disable the feature or true to enable the feature. $ oc patch sriovoperatorconfig default \ --type=merge -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableInjector": <value> } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value> 23.3.1.6. Disabling or enabling the SR-IOV Network Operator admission controller webhook To disable or enable the admission controller webhook, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure Set the enableOperatorWebhook field. Replace <value> with false to disable the feature or true to enable it: $ oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableOperatorWebhook": <value> } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value> 23.3.1.7. Configuring a custom NodeSelector for the SR-IOV Network Config daemon The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. 
You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. To specify the nodes where the SR-IOV Network Config daemon is deployed, complete the following procedure. Important When you update the configDaemonNodeSelector field, the SR-IOV Network Config daemon is recreated on each selected node. While the daemon is recreated, cluster users are unable to apply any new SR-IOV Network node policy or create new SR-IOV pods. Procedure To update the node selector for the operator, enter the following command: $ oc patch sriovoperatorconfig default --type=json \ -n openshift-sriov-network-operator \ --patch '[{ "op": "replace", "path": "/spec/configDaemonNodeSelector", "value": {<node_label>} }]' Replace <node_label> with a label to apply as in the following example: "node-role.kubernetes.io/worker": "" . Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label> 23.3.1.8. Configuring the SR-IOV Network Operator for single node installations By default, the SR-IOV Network Operator drains workloads from a node before every policy change. The Operator performs this action to ensure that there are no workloads using the virtual functions before the reconfiguration. For installations on a single node, there are no other nodes to receive the workloads. As a result, the Operator must be configured not to drain the workloads from the single node. Important After performing the following procedure to disable draining workloads, you must remove any workload that uses an SR-IOV network interface before you change any SR-IOV network node policy. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure To set the disableDrain field to true , enter the following command: $ oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "disableDrain": true } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true 23.3.1.9. Deploying the SR-IOV Operator for hosted control planes Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane. Prerequisites You must configure and deploy the hosted cluster on AWS. For more information, see Configuring the hosting cluster on AWS (Technology Preview) . 
Procedure Create a namespace and an Operator group: apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator Create a subscription to the SR-IOV Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: "" source: redhat-operators sourceNamespace: openshift-marketplace Verification To verify that the SR-IOV Operator is ready, run the following command and view the resulting output: $ oc get csv -n openshift-sriov-network-operator Example output NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.14.0-202211021237 SR-IOV Network Operator 4.14.0-202211021237 sriov-network-operator.4.14.0-202210290517 Succeeded To verify that the SR-IOV pods are deployed, run the following command: $ oc get pods -n openshift-sriov-network-operator 23.3.2. Next steps Configuring an SR-IOV network device 23.4. Configuring an SR-IOV network device You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster. 23.4.1. SR-IOV network node configuration object You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the sriovnetwork.openshift.io API group. The following YAML describes an SR-IOV network node policy: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 externallyManaged: false 9 nicSelector: 10 vendor: "<vendor_code>" 11 deviceID: "<device_id>" 12 pfNames: ["<pf_name>", ...] 13 rootDevices: ["<pci_bus_id>", ...] 14 netFilter: "<filter_string>" 15 deviceType: <device_type> 16 isRdma: false 17 linkType: <link_type> 18 eSwitchMode: "switchdev" 19 excludeTopology: false 20 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. When specifying a name, be sure to use the accepted syntax expression ^[a-zA-Z0-9_]+$ in the resourceName . 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. Important The SR-IOV Network Operator applies node network configuration policies to nodes in sequence. Before applying node network configuration policies, the SR-IOV Network Operator checks if the machine config pool (MCP) for a node is in an unhealthy state such as Degraded or Updating . If a node is in an unhealthy MCP, the process of applying node network configuration policies to all targeted nodes in the cluster pauses until the MCP returns to a healthy state. 
To prevent a node in an unhealthy MCP from blocking the application of node network configuration policies to other nodes, including nodes in other MCPs, you must create a separate node network configuration policy for each MCP. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 Optional: The maximum transmission unit (MTU) of the physical function and all its virtual functions. The maximum MTU value can vary for different network interface controller (NIC) models. Important If you want to create a virtual function on the default network interface, ensure that the MTU is set to a value that matches the cluster MTU. If you want to modify the MTU of a single virtual function while the function is assigned to a pod, leave the MTU value blank in the SR-IOV network node policy. Otherwise, the SR-IOV Network Operator reverts the MTU of the virtual function to the MTU value defined in the SR-IOV network node policy, which might trigger a node drain. 7 Optional: Set needVhostNet to true to mount the /dev/vhost-net device in the pod. Use the mounted /dev/vhost-net device with Data Plane Development Kit (DPDK) to forward traffic to the kernel network stack. 8 The number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 9 The externallyManaged field indicates whether the SR-IOV Network Operator manages all, or only a subset of virtual functions (VFs). With the value set to false the SR-IOV Network Operator manages and configures all VFs on the PF. Note When externallyManaged is set to true , you must manually create the Virtual Functions (VFs) on the physical function (PF) before applying the SriovNetworkNodePolicy resource. If the VFs are not pre-created, the SR-IOV Network Operator's webhook will block the policy request. When externallyManaged is set to false , the SR-IOV Network Operator automatically creates and manages the VFs, including resetting them if necessary. To use VFs on the host system, you must create them through NMState, and set externallyManaged to true . In this mode, the SR-IOV Network Operator does not modify the PF or the manually managed VFs, except for those explicitly defined in the nicSelector field of your policy. However, the SR-IOV Network Operator continues to manage VFs that are used as pod secondary interfaces. 10 The NIC selector identifies the device to which this resource applies. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 11 Optional: The hexadecimal vendor identifier of the SR-IOV network device. The only allowed values are 8086 (Intel) and 15b3 (Mellanox). 12 Optional: The hexadecimal device identifier of the SR-IOV network device. For example, 101b is the device ID for a Mellanox ConnectX-6 device.
13 Optional: An array of one or more physical function (PF) names the resource must apply to. 14 Optional: An array of one or more PCI bus addresses the resource must apply to. For example 0000:02:00.1 . 15 Optional: The platform-specific network filter. The only supported platform is Red Hat OpenStack Platform (RHOSP). Acceptable values use the following format: openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx . Replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with the value from the /var/config/openstack/latest/network_data.json metadata file. This filter ensures that VFs are associated with a specific OpenStack network. The operator uses this filter to map the VFs to the appropriate network based on metadata provided by the OpenStack platform. 16 Optional: The driver to configure for the VFs created from this resource. The only allowed values are netdevice and vfio-pci . The default value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, use the netdevice driver type and set isRdma to true . 17 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note You cannot set the isRdma parameter to true for Intel NICs. 18 Optional: The link type for the VFs. The default value is eth for Ethernet. Change this value to ib for InfiniBand. When linkType is set to ib , isRdma is automatically set to true by the SR-IOV Network Operator webhook. When linkType is set to ib , deviceType should not be set to vfio-pci . Do not set linkType to eth for SriovNetworkNodePolicy, because this can lead to an incorrect number of available devices reported by the device plugin. 19 Optional: To enable hardware offloading, you must set the eSwitchMode field to "switchdev" . For more information about hardware offloading, see "Configuring hardware offloading". 20 Optional: To exclude advertising an SR-IOV network resource's NUMA node to the Topology Manager, set the value to true . The default value is false .
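Before the platform-specific examples in the next section, the following minimal bare-metal Ethernet policy shows the fields described above in context. This sketch is illustrative only: the policy name, resource name, and pfNames value are placeholders that must match your environment and hardware. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-eth-example namespace: openshift-sriov-network-operator spec: resourceName: ethnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: pfNames: ["ens1f0"] deviceType: netdevice isRdma: false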
23.4.1.1. SR-IOV network node configuration examples The following example describes the configuration for an InfiniBand device: Example configuration for an InfiniBand device apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "15b3" deviceID: "101b" rootDevices: - "0000:19:00.0" linkType: ib isRdma: true The following example describes the configuration for an SR-IOV network device in a RHOSP virtual machine: Example configuration for an SR-IOV device in a virtual machine apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 1 1 nicSelector: vendor: "15b3" deviceID: "101b" netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" 2 1 The numVfs field is always set to 1 when configuring the node network policy for a virtual machine. 2 The netFilter field must refer to a network ID when the virtual machine is deployed on RHOSP. Valid values for netFilter are available from an SriovNetworkNodeState object. 23.4.1.2. Virtual function (VF) partitioning for SR-IOV devices In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into multiple resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs to load with the vfio-pci driver. In such a deployment, the pfNames selector in your SriovNetworkNodePolicy custom resource (CR) can be used to specify a range of VFs for a pool using the following format: <pfname>#<first_vf>-<last_vf> . For example, the following YAML shows the selector for an interface named netpf0 with VFs 2 through 7 : pfNames: ["netpf0#2-7"] netpf0 is the PF interface name. 2 is the first VF index (0-based) that is included in the range. 7 is the last VF index (0-based) that is included in the range. You can select VFs from the same PF by using different policy CRs if the following requirements are met: The numVfs value must be identical for policies that select the same PF. The VF index must be in the range of 0 to <numVfs>-1 . For example, if you have a policy with numVfs set to 8 , then the <first_vf> value must not be smaller than 0 , and the <last_vf> must not be larger than 7 . The VF ranges in different policies must not overlap. The <first_vf> must not be larger than the <last_vf> . The following example illustrates NIC partitioning for an SR-IOV device. The policy policy-net-1 defines a resource pool net-1 that contains VF 0 of PF netpf0 with the default VF driver. The policy policy-net-1-dpdk defines a resource pool net-1-dpdk that contains VFs 8 to 15 of PF netpf0 with the vfio-pci VF driver. Policy policy-net-1 : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#0-0"] deviceType: netdevice Policy policy-net-1-dpdk : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#8-15"] deviceType: vfio-pci Verifying that the interface is successfully partitioned Confirm that the interface is partitioned into virtual functions (VFs) for the SR-IOV device by running the following command. $ ip link show <interface> 1 1 Replace <interface> with the interface that you specified when partitioning into VFs for the SR-IOV device, for example, ens3f1 . Example output 5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
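Assuming the two policies above are applied, each resource pool is also advertised as an extended resource on the node. As a quick additional check (the openshift.io/ resource prefix and the counts shown here are illustrative and depend on your configuration): $ oc get node <node_name> -o jsonpath='{.status.allocatable}' Example output (abbreviated): {"cpu":"32","memory":"...","openshift.io/net1":"1","openshift.io/net1dpdk":"8"}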
23.4.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: $ oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in the sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
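In automation, you might wait for the policy to finish applying by polling the same field. The following loop is a convenience sketch, not part of the official procedure; replace <node_name> as described above: $ until [ "$(oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}')" = "Succeeded" ]; do sleep 10; done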
Additional resources Understanding how to update labels on nodes . 23.4.2.1. Configuring parallel node draining during SR-IOV network policy updates By default, the SR-IOV Network Operator drains workloads from a node before every policy change. The Operator completes this action, one node at a time, to ensure that no workloads are affected by the reconfiguration. In large clusters, draining nodes sequentially can be time-consuming, taking hours or even days. In time-sensitive environments, you can enable parallel node draining in an SriovNetworkPoolConfig custom resource (CR) for faster rollouts of SR-IOV network configurations. To configure parallel draining, use the SriovNetworkPoolConfig CR to create a node pool. You can then add nodes to the pool and define the maximum number of nodes in the pool that the Operator can drain in parallel. With this approach, you can enable parallel draining for faster reconfiguration while ensuring you still have enough nodes remaining in the pool to handle any running workloads. Note A node can belong to only one SR-IOV network pool configuration. If a node is not part of a pool, it is added to a virtual, default pool that is configured to drain one node at a time only. The node might restart during the draining process. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the SR-IOV Network Operator. Ensure that nodes have hardware that supports SR-IOV. Procedure Create a SriovNetworkPoolConfig resource: Create a YAML file that defines the SriovNetworkPoolConfig resource: Example sriov-nw-pool.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: pool-1 1 namespace: openshift-sriov-network-operator 2 spec: maxUnavailable: 2 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/worker: "" 1 Specify the name of the SriovNetworkPoolConfig object. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Specify an integer number or percentage value for nodes that can be unavailable in the pool during an update. For example, if you have 10 nodes and you set the maximum unavailable value to 2, then only 2 nodes can be drained in parallel at any time, leaving 8 nodes for handling workloads. 4 Specify the nodes to add to the pool by using the node selector. This example adds all nodes with the worker role to the pool. Create the SriovNetworkPoolConfig resource by running the following command: $ oc create -f sriov-nw-pool.yaml Create the sriov-test namespace by running the following command: $ oc create namespace sriov-test Create a SriovNetworkNodePolicy resource: Create a YAML file that defines the SriovNetworkNodePolicy resource: Example sriov-node-policy.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: ["ens1"] nodeSelector: node-role.kubernetes.io/worker: "" numVfs: 5 priority: 99 resourceName: sriov_nic_1 Create the SriovNetworkNodePolicy resource by running the following command: $ oc create -f sriov-node-policy.yaml Create a SriovNetwork resource: Create a YAML file that defines the SriovNetwork resource: Example sriov-network.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: linkState: auto networkNamespace: sriov-test resourceName: sriov_nic_1 capabilities: '{ "mac": true, "ips": true }' ipam: '{ "type": "static" }' Create the SriovNetwork resource by running the following command: $ oc create -f sriov-network.yaml Verification View the node pool you created by running the following command: $ oc get sriovNetworkpoolConfig -n openshift-sriov-network-operator Example output NAME AGE pool-1 67s 1 1 In this example, pool-1 contains all the nodes with the worker role. To demonstrate the node draining process by using the example scenario from the procedure, complete the following steps: Update the number of virtual functions in the SriovNetworkNodePolicy resource to trigger workload draining in the cluster: $ oc patch SriovNetworkNodePolicy sriov-nic-1 -n openshift-sriov-network-operator --type merge -p '{"spec": {"numVfs": 4}}' Monitor the draining status on the target cluster by running the following command: $ oc get sriovNetworkNodeState -n openshift-sriov-network-operator Example output NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 InProgress Drain_Required DrainComplete 3d10h openshift-sriov-network-operator worker-1 InProgress Drain_Required DrainComplete 3d10h When the draining process is complete, the SYNC STATUS changes to Succeeded , and the DESIRED SYNC STATE and CURRENT SYNC STATE values return to Idle . Example output NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 Succeeded Idle Idle 3d10h openshift-sriov-network-operator worker-1 Succeeded Idle Idle 3d10h
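Because maxUnavailable also accepts a percentage value, you can widen or narrow the drain window later without recreating the pool. The following sketch, modeled on the patch commands used earlier in this procedure, allows 20% of the nodes in pool-1 to drain in parallel: $ oc patch SriovNetworkPoolConfig pool-1 -n openshift-sriov-network-operator --type merge -p '{"spec": {"maxUnavailable": "20%"}}'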
23.4.3. Troubleshooting SR-IOV configuration After following the procedure to configure an SR-IOV network device, the following sections address some error conditions. To display the state of nodes, run the following command: $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> where: <node_name> specifies the name of a node with an SR-IOV network device. Error output: Cannot allocate memory "lastSyncError": "write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory" When a node indicates that it cannot allocate memory, check the following items: Confirm that global SR-IOV settings are enabled in the BIOS for the node. Confirm that VT-d is enabled in the BIOS for the node. 23.4.4. Assigning an SR-IOV network to a VRF As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plugin. To do this, add the VRF configuration to the optional metaPlugins parameter of the SriovNetwork resource. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. 23.4.4.1. Creating an additional SR-IOV network attachment with the CNI VRF plugin The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional SR-IOV network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } vlan: 0 resourceName: intelnics metaPlugins : | { "type": "vrf", 1 "vrfname": "example-vrf-name" 2 } 1 type must be set to vrf . 2 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. Create the SriovNetwork resource: $ oc create -f sriov-network-attachment.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command.
$ oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-sriov-network-1 . Example output NAME AGE additional-sriov-network-1 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the VRF CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create an SR-IOV network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the SR-IOV additional network. Remote shell into the pod and run the following command: $ ip vrf show Example output Name Table ----------------------- red 10 Confirm that the VRF interface is the master of the secondary interface: $ ip link Example output ... 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode ... 23.4.5. Exclude the SR-IOV network topology for NUMA-aware scheduling You can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager for more flexible SR-IOV network deployments during NUMA-aware pod scheduling. In some scenarios, it is a priority to maximize CPU and memory resources for a pod on a single NUMA node. By not providing a hint to the Topology Manager about the NUMA node for the pod's SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. This can add to network latency because of the data transfer between NUMA nodes. However, it is acceptable in scenarios when workloads require optimal CPU and memory performance. For example, consider a compute node, compute-1 , that features two NUMA nodes: numa0 and numa1 . The SR-IOV-enabled NIC is present on numa0 . The CPUs available for pod scheduling are present on numa1 only. By setting the excludeTopology specification to true , the Topology Manager can assign CPU and memory resources for the pod to numa1 and can assign the SR-IOV network resource for the same pod to numa0 . This is only possible when you set the excludeTopology specification to true . Otherwise, the Topology Manager attempts to place all resources on the same NUMA node. 23.4.5.1. Excluding the SR-IOV network topology for NUMA-aware scheduling To exclude advertising the SR-IOV network resource's Non-Uniform Memory Access (NUMA) node to the Topology Manager, you can configure the excludeTopology specification in the SriovNetworkNodePolicy custom resource. Use this configuration for more flexible SR-IOV network deployments during NUMA-aware pod scheduling. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information about CPU Manager, see the Additional resources section. You have configured the Topology Manager policy to single-numa-node . You have installed the SR-IOV Network Operator.
Procedure Create the SriovNetworkNodePolicy CR: Save the following YAML in the sriov-network-node-policy.yaml file, replacing values in the YAML to match your environment: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: "<vendor_ID>" deviceID: "<device_ID>" deviceType: netdevice excludeTopology: true 3 1 The resource name of the SR-IOV network device plugin. This YAML uses a sample resourceName value. 2 Identify the device for the Operator to configure by using the NIC selector. 3 To exclude advertising the NUMA node for the SR-IOV network resource to the Topology Manager, set the value to true . The default value is false . Note If multiple SriovNetworkNodePolicy resources target the same SR-IOV network resource, the SriovNetworkNodePolicy resources must have the same value as the excludeTopology specification. Otherwise, the conflicting policy is rejected. Create the SriovNetworkNodePolicy resource by running the following command: $ oc create -f sriov-network-node-policy.yaml Example output sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created Create the SriovNetwork CR: Save the following YAML in the sriov-network.yaml file, replacing values in the YAML to match your environment: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { "type": "<ipam_type>" } 1 Replace sriov-numa-0-network with the name for the SR-IOV network resource. 2 Specify the resource name for the SriovNetworkNodePolicy CR from the previous step. This YAML uses a sample resourceName value. 3 Enter the namespace for your SR-IOV network resource. 4 Enter the IP address management configuration for the SR-IOV network. Create the SriovNetwork resource by running the following command: $ oc create -f sriov-network.yaml Example output sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created Create a pod and assign the SR-IOV network resource from the previous step: Save the following YAML in the sriov-network-pod.yaml file, replacing values in the YAML to match your environment: apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "sriov-numa-0-network" 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 1 This is the name of the SriovNetwork resource that uses the SriovNetworkNodePolicy resource. Create the Pod resource by running the following command: $ oc create -f sriov-network-pod.yaml Example output pod/example-pod created Verification Verify the status of the pod by running the following command, replacing <pod_name> with the name of the pod: $ oc get pod <pod_name> Example output NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h Open a debug session with the target pod to verify that the SR-IOV network resources are deployed to a different NUMA node than the memory and CPU resources. Open a debug session with the pod by running the following command, replacing <pod_name> with the target pod name. $ oc debug pod/<pod_name> Set /host as the root directory within the debug shell.
The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host , you can run binaries from the host file system: $ chroot /host View information about the CPU allocation by running the following commands: $ lscpu | grep NUMA Example output NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,... NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,... $ cat /proc/self/status | grep Cpus Example output Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7 $ cat /sys/class/net/net1/device/numa_node Example output 0 In this example, CPUs 1,3,5, and 7 are allocated to NUMA node1 , but the SR-IOV network resource can use the NIC in NUMA node0 . Note If the excludeTopology specification is set to true , it is possible that the required resources exist in the same NUMA node. Additional resources Using CPU Manager 23.4.6. Next steps Configuring an SR-IOV network attachment 23.5. Configuring an SR-IOV Ethernet network attachment You can configure an Ethernet network attachment for a Single Root I/O Virtualization (SR-IOV) device in the cluster. 23.5.1. Ethernet device configuration object You can configure an Ethernet network device by defining an SriovNetwork object. The following YAML describes an SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: "<trust_vf>" 12 capabilities: <capabilities> 13 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: A Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: The spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the object is rejected by the SR-IOV Network Operator. 7 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 8 Optional: The link state of virtual function (VF). Allowed values are enable , disable , and auto . 9 Optional: A maximum transmission rate, in Mbps, for the VF. 10 Optional: A minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 11 Optional: An IEEE 802.1p priority level for the VF. The default value is 0 . 12 Optional: The trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value that you specify in quotes, or the SR-IOV Network Operator rejects the object. 13 Optional: The capabilities to configure for this additional network. You can specify '{ "ips": true }' to enable IP address support or '{ "mac": true }' to enable MAC address support.
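To make the optional fields concrete, the following sketch fills several of them in with illustrative values. The object name, resource name, and target namespace are placeholders, and note that the spoofChk and trust values must be quoted strings: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-vlan-net namespace: openshift-sriov-network-operator spec: resourceName: ethnic1 networkNamespace: app-namespace vlan: 100 vlanQoS: 3 spoofChk: "on" trust: "off" linkState: auto maxTxRate: 1000 minTxRate: 100 ipam: |- { "type": "dhcp" }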
23.5.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 23.5.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 23.3. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns object Optional: An object specifying the DNS configuration. The addresses array requires objects with the following fields: Table 23.4. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 23.5. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 23.6. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain string The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 23.5.1.1.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. The SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 23.7. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } }
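Returning to the static type described earlier, the minimal example can be extended with the route and DNS fields from Tables 23.3 to 23.6. The following sketch is illustrative; all addresses and domain names are placeholders: { "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24", "gateway": "192.168.10.1" } ], "routes": [ { "dst": "0.0.0.0/0", "gw": "192.168.10.1" } ], "dns": { "nameservers": ["192.168.10.5"], "domain": "example.com", "search": ["example.com"] } } }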
23.5.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 23.8. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 23.5.1.2. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach the network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure that the IP addresses are assigned: $ oc exec -it mypod -- ip a Additional resources Attaching a pod to an additional network 23.5.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: $ oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovNetwork object. $ oc get net-attach-def -n <namespace> 23.5.3. Next steps Adding a pod to an SR-IOV additional network 23.5.4. Additional resources Configuring an SR-IOV network device
23.6. Configuring an SR-IOV InfiniBand network attachment You can configure an InfiniBand (IB) network attachment for a Single Root I/O Virtualization (SR-IOV) device in the cluster. 23.6.1. InfiniBand device configuration object You can configure an InfiniBand (IB) network device by defining an SriovIBNetwork object. The following YAML describes an SriovIBNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 The namespace where the SR-IOV Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovIBNetwork object. Only pods in the target namespace can attach to the network device. 5 Optional: A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: The link state of virtual function (VF). Allowed values are enable , disable , and auto . 7 Optional: The capabilities to configure for this network. You can specify '{ "ips": true }' to enable IP address support or '{ "infinibandGUID": true }' to enable IB Global Unique Identifier (GUID) support. 23.6.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 23.6.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 23.9. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns object Optional: An object specifying the DNS configuration. The addresses array requires objects with the following fields: Table 23.10. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 23.11. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 23.12. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain string The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com .
search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 23.6.1.1.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 23.13. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 23.6.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 23.14. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 23.6.1.2. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach the network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure that the IP addresses are assigned: $ oc exec -it mypod -- ip a Additional resources Attaching a pod to an additional network 23.6.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovIBNetwork object. When you create an SriovIBNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Note Do not modify or delete an SriovIBNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovIBNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: $ oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovIBNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovIBNetwork object. $ oc get net-attach-def -n <namespace> 23.6.3. Next steps Adding a pod to an SR-IOV additional network 23.6.4. Additional resources Configuring an SR-IOV network device 23.7. Adding a pod to an SR-IOV additional network You can add a pod to an existing Single Root I/O Virtualization (SR-IOV) network. 23.7.1. Runtime configuration for a network attachment When attaching a pod to an additional network, you can specify a runtime configuration to make specific customizations for the pod. For example, you can request a specific MAC hardware address. You specify the runtime configuration by setting an annotation in the pod specification. The annotation key is k8s.v1.cni.cncf.io/networks , and it accepts a JSON object that describes the runtime configuration. 23.7.1.1. Runtime configuration for an Ethernet-based SR-IOV attachment The following JSON describes the runtime configuration options for an Ethernet-based SR-IOV network attachment. [ { "name": "<name>", 1 "mac": "<mac_address>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "net1", "mac": "20:04:0f:f1:88:01", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 23.7.1.2. Runtime configuration for an InfiniBand-based SR-IOV attachment The following JSON describes the runtime configuration options for an InfiniBand-based SR-IOV network attachment. [ { "name": "<network_attachment>", 1 "infiniband-guid": "<guid>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 The InfiniBand GUID for the SR-IOV device.
To use this feature, you also must specify { "infinibandGUID": true } in the SriovIBNetwork object. 3 The IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovIBNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "ib1", "infiniband-guid": "c2:11:22:33:44:55:66:77", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 23.7.2. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. Additional networks are attached to a pod only when the pod is created. You cannot attach additional networks to a pod that already exists. The pod must be in the same namespace as the additional network. Note The SR-IOV Network Resource Injector adds the resource field to the first container in a pod automatically. If you are using an Intel network interface controller (NIC) in Data Plane Development Kit (DPDK) mode, only the first container in your pod is configured to access the NIC. Your SR-IOV additional network is configured for DPDK mode if the deviceType is set to vfio-pci in the SriovNetworkNodePolicy object. You can work around this issue by either ensuring that the container that needs access to the NIC is the first container defined in the Pod object or by disabling the Network Resource Injector. For more information, see BZ#1990953 . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Install the SR-IOV Operator. Create either an SriovNetwork object or an SriovIBNetwork object to attach the pod to. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace after the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. $ oc create -f <name>.yaml Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod.
$ oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: $ oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 23.7.3. Creating a non-uniform memory access (NUMA) aligned SR-IOV pod You can create a NUMA aligned SR-IOV pod by restricting SR-IOV and the CPU resources allocated from the same NUMA node with restricted or single-numa-node Topology Manager policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information on CPU Manager, see the "Additional resources" section. You have configured the Topology Manager policy to single-numa-node . Note When single-numa-node is unable to satisfy the request, you can configure the Topology Manager policy to restricted . For more flexible SR-IOV network resource scheduling, see Excluding SR-IOV network topology during NUMA-aware scheduling in the Additional resources section. Procedure Create the following SR-IOV pod spec, and then save the YAML in the <name>-sriov-pod.yaml file. Replace <name> with a name for this pod. The following example shows an SR-IOV pod spec: apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: ["sleep", "infinity"] resources: limits: memory: "1Gi" 3 cpu: "2" 4 requests: memory: "1Gi" cpu: "2" 1 Replace <name> with the name of the SR-IOV network attachment definition CR. 2 Replace <image> with the name of the sample-pod image. 3 To create the SR-IOV pod with guaranteed QoS, set memory limits equal to memory requests . 4 To create the SR-IOV pod with guaranteed QoS, set cpu limits equal to cpu requests . Create the sample SR-IOV pod by running the following command: $ oc create -f <filename> 1 1 Replace <filename> with the name of the file you created in the previous step. Confirm that the sample-pod is configured with guaranteed QoS. $ oc describe pod sample-pod Confirm that the sample-pod is allocated with exclusive CPUs. $ oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus Confirm that the SR-IOV device and CPUs that are allocated for the sample-pod are on the same NUMA node. $ oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus 23.7.4. A test pod template for clusters that use SR-IOV on OpenStack The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" # ...
spec: containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages 1 This example assumes that the name of the performance profile is cnf-performanceprofile . 23.7.5. Additional resources Configuring an SR-IOV Ethernet network attachment Configuring an SR-IOV InfiniBand network attachment Using CPU Manager Exclude SR-IOV network topology for NUMA-aware scheduling 23.8. Configuring interface-level network sysctl settings and all-multicast mode for SR-IOV networks As a cluster administrator, you can change interface-level network sysctls and several interface attributes such as promiscuous mode, all-multicast mode, MTU, and MAC address by using the tuning Container Network Interface (CNI) meta plugin for a pod connected to an SR-IOV network device. 23.8.1. Labeling nodes with an SR-IOV enabled NIC If you want to enable SR-IOV on only SR-IOV capable nodes, there are a couple of ways to do this: Install the Node Feature Discovery (NFD) Operator. NFD detects the presence of SR-IOV enabled NICs and labels the nodes with node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = true . Examine the SriovNetworkNodeState CR for each node. The interfaces stanza includes a list of all of the SR-IOV devices discovered by the SR-IOV Network Operator on the worker node. Label each node with feature.node.kubernetes.io/network-sriov.capable: "true" by using the following command: $ oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" Note You can label the nodes with whatever name you want. 23.8.2. Setting one sysctl flag You can set interface-level network sysctl settings for a pod connected to an SR-IOV network device. In this example, net.ipv4.conf.IFNAME.accept_redirects is set to 1 on the created virtual interfaces. The sysctl-tuning-test namespace is used in this example. Use the following command to create the sysctl-tuning-test namespace: $ oc create namespace sysctl-tuning-test 23.8.2.1. Setting one sysctl flag on nodes with SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain and reboot the nodes. It can take several minutes for a configuration change to apply. Follow this procedure to create a SriovNetworkNodePolicy custom resource (CR). Procedure Create an SriovNetworkNodePolicy custom resource (CR). For example, save the following YAML as the file policyoneflag-sriov-node-network.yaml : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable: "true" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: ["ens5"] 8 deviceType: "netdevice" 9 isRdma: false 10 1 The name for the custom resource object.
2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 The number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 7 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 8 Optional: An array of one or more physical function (PF) names for the device. 9 Optional: The driver type for the virtual functions. The only allowed value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, set isRdma to true . 10 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note The vfio-pci driver type is not supported. Create the SriovNetworkNodePolicy object: USD oc create -f policyoneflag-sriov-node-network.yaml After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace change to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Example output Succeeded 23.8.2.2. Configuring sysctl on an SR-IOV network You can set interface-specific sysctl settings on virtual interfaces created by SR-IOV by adding the tuning configuration to the optional metaPlugins parameter of the SriovNetwork resource. The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl settings, create an additional SR-IOV network with the Container Network Interface (CNI) tuning plugin.
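For orientation, the tuning entry that the following procedure injects through the SriovNetwork resource is ordinary CNI plugin JSON. The following is a minimal sketch of such a fragment, using the same accept_redirects setting; the exact fields supplied through the metaPlugins parameter are shown in the procedure that follows:
{
  "type": "tuning",
  "sysctl": {
    "net.ipv4.conf.IFNAME.accept_redirects": "1"
  }
}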
Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-interface-sysctl.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ "type": "static" }' 5 capabilities: '{ "mac": true, "ips": true }' 6 metaPlugins: | 7 { "type": "tuning", "capabilities":{ "mac":true }, "sysctl":{ "net.ipv4.conf.IFNAME.accept_redirects": "1" } } 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: Set capabilities for the additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 7 Optional: The metaPlugins parameter is used to add additional capabilities to the device. In this use case, set the type field to tuning . Specify the interface-level network sysctl you want to set in the sysctl field. Create the SriovNetwork resource: USD oc create -f sriov-network-interface-sysctl.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the value for networkNamespace that you specified in the SriovNetwork object. For example, sysctl-tuning-test . Example output NAME AGE onevalidflag 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the tuning CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create a Pod CR. Save the following YAML as the file examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "onevalidflag", 1 "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object.
3 Optional: IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Create the Pod CR: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n sysctl-tuning-test Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n sysctl-tuning-test tunepod Verify the values of the configured sysctl flag. Find the value net.ipv4.conf.IFNAME.accept_redirects by running the following command: USD sysctl net.ipv4.conf.net1.accept_redirects Example output net.ipv4.conf.net1.accept_redirects = 1 23.8.3. Configuring sysctl settings for pods associated with a bonded SR-IOV interface You can set interface-level network sysctl settings for a pod connected to a bonded SR-IOV network device. In this example, all the interface-level network sysctl settings that can be configured are set on the bonded interface. The sysctl-tuning-test namespace is used in this example. Procedure Use the following command to create the sysctl-tuning-test namespace: USD oc create namespace sysctl-tuning-test 23.8.3.1. Setting all sysctl flags on nodes with bonded SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Follow this procedure to create a SriovNetworkNodePolicy custom resource (CR). Procedure Create an SriovNetworkNodePolicy custom resource (CR). Save the following YAML as the file policyallflags-sriov-node-network.yaml . Replace policyallflags with the name for the configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable: "true" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: ["ens1f0"] 8 deviceType: "netdevice" 9 isRdma: false 10 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 The number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 .
7 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 8 Optional: An array of one or more physical function (PF) names for the device. 9 Optional: The driver type for the virtual functions. The only allowed value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, set isRdma to true . 10 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note The vfio-pci driver type is not supported. Create the SriovNetworkNodePolicy object: USD oc create -f policyallflags-sriov-node-network.yaml After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace change to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Example output Succeeded 23.8.3.2. Configuring sysctl on a bonded SR-IOV network You can set interface-specific sysctl settings on a bonded interface created from two SR-IOV interfaces. Do this by adding the tuning configuration to the optional plugins parameter of the bond network attachment definition. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To change specific interface-level network sysctl settings, create the SriovNetwork custom resource (CR) with the Container Network Interface (CNI) tuning plugin by using the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the bonded interface as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ "mac": true, "ips": true }' 5 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network.
5 Optional: The capabilities to configure for this additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Create a bond network attachment definition as in the following example CR. Save the YAML as the file sriov-bond-network-interface.yaml . apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ "cniVersion":"0.4.0", "name":"bound-net", "plugins":[ { "type":"bond", 1 "mode": "active-backup", 2 "failOverMac": 1, 3 "linksInContainer": true, 4 "miimon": "100", "links": [ 5 {"name": "net1"}, {"name": "net2"} ], "ipam":{ 6 "type":"static" } }, { "type":"tuning", 7 "capabilities":{ "mac":true }, "sysctl":{ "net.ipv4.conf.IFNAME.accept_redirects": "0", "net.ipv4.conf.IFNAME.accept_source_route": "0", "net.ipv4.conf.IFNAME.disable_policy": "1", "net.ipv4.conf.IFNAME.secure_redirects": "0", "net.ipv4.conf.IFNAME.send_redirects": "0", "net.ipv6.conf.IFNAME.accept_redirects": "0", "net.ipv6.conf.IFNAME.accept_source_route": "1", "net.ipv6.neigh.IFNAME.base_reachable_time_ms": "20000", "net.ipv6.neigh.IFNAME.retrans_time_ms": "2000" } } ] }' 1 The type is bond . 2 The mode attribute specifies the bonding mode. The bonding modes supported are: balance-rr - 0 active-backup - 1 balance-xor - 2 For balance-rr or balance-xor modes, you must set the trust mode to on for the SR-IOV virtual function. 3 The failOverMac attribute is mandatory for active-backup mode. 4 The linksInContainer=true flag informs the Bond CNI that the required interfaces are to be found inside the container. By default, Bond CNI looks for these interfaces on the host, which does not work for integration with SR-IOV and Multus. 5 The links section defines which interfaces will be used to create the bond. By default, Multus names the attached interfaces as "net" plus a consecutive number, starting with one. 6 A configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. In this pod example, IP addresses are configured manually, so ipam is set to static. 7 Add additional capabilities to the device. For example, set the type field to tuning . Specify the interface-level network sysctl you want to set in the sysctl field. This example sets all interface-level network sysctl settings that can be set. Create the bond network attachment resource: USD oc create -f sriov-bond-network-interface.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the networkNamespace that you specified when configuring the network attachment, for example, sysctl-tuning-test . Example output NAME AGE bond-sysctl-network 22m allvalidflags 47m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network resource is successful To verify that the tuning CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create a Pod CR.
For example, save the following YAML as the file examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {"name": "allvalidflags"}, 1 {"name": "allvalidflags"}, { "name": "bond-sysctl-network", "interface": "bond0", "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Apply the YAML: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n sysctl-tuning-test Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n sysctl-tuning-test tunepod Verify the values of the configured sysctl flag. Find the value net.ipv6.neigh.IFNAME.base_reachable_time_ms by running the following command: USD sysctl net.ipv6.neigh.bond0.base_reachable_time_ms Example output net.ipv6.neigh.bond0.base_reachable_time_ms = 20000 23.8.4. About all-multicast mode Enabling all-multicast mode, particularly in the context of rootless applications, is critical. If you do not enable this mode, you must grant the NET_ADMIN capability to the pod's Security Context Constraints (SCC). Granting the NET_ADMIN capability gives the pod privileges to make changes that extend beyond its specific requirements, which can expose security vulnerabilities. The tuning CNI plugin supports changing several interface attributes, including all-multicast mode. By enabling this mode, you can allow applications running on Virtual Functions (VFs) that are configured on an SR-IOV network device to receive multicast traffic from applications on other VFs, whether attached to the same or different physical functions. 23.8.4.1. Enabling the all-multicast mode on an SR-IOV network You can enable the all-multicast mode on an SR-IOV interface by: Adding the tuning configuration to the metaPlugins parameter of the SriovNetwork resource Setting the allmulti field to true in the tuning configuration Note Ensure that you create the virtual function (VF) with trust enabled. The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. Enable the all-multicast mode on an SR-IOV network by following this guidance. Prerequisites You have installed the OpenShift Container Platform CLI (oc).
You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. You have installed the SR-IOV Network Operator. You have configured an appropriate SriovNetworkNodePolicy object. Procedure Create a YAML file with the following settings that defines a SriovNetworkNodePolicy object for a Mellanox ConnectX-5 device. Save the YAML file as sriovnetpolicy-mlx.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-mlx namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: deviceID: "1017" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: "15b3" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 10 priority: 99 resourceName: resourcemlx Optional: If the SR-IOV capable cluster nodes are not already labeled, add the SriovNetworkNodePolicy.Spec.NodeSelector label. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriovnetpolicy-mlx.yaml After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace automatically move to a Running status. Create the enable-allmulti-test namespace by running the following command: USD oc create namespace enable-allmulti-test Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR YAML, and save the file as sriov-enable-all-multicast.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: enableallmulti 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: enableallmulti 3 networkNamespace: enable-allmulti-test 4 ipam: '{ "type": "static" }' 5 capabilities: '{ "mac": true, "ips": true }' 6 trust: "on" 7 metaPlugins: | 8 { "type": "tuning", "capabilities":{ "mac":true }, "allmulti": true } 1 Specify a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Specify a value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Specify the target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Specify a configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: Set capabilities for the additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 7 Specify the trust mode of the virtual function. This must be set to "on". 8 Add more capabilities to the device by using the metaPlugins parameter. In this use case, set the type field to tuning , and add the allmulti field and set it to true . Create the SriovNetwork resource by running the following command: USD oc create -f sriov-enable-all-multicast.yaml Verification of the NetworkAttachmentDefinition CR Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the value for networkNamespace that you specified in the SriovNetwork object.
For this example, that is enable-allmulti-test . Example output NAME AGE enableallmulti 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Display information about the SR-IOV network resources by running the following command: USD oc get sriovnetwork -n openshift-sriov-network-operator Verifying that the tuning CNI is correctly configured To verify that the tuning CNI is correctly configured and that the additional SR-IOV network attachment is attached, follow these steps: Create a Pod CR. Save the following sample YAML in a file named examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: samplepod namespace: enable-allmulti-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "enableallmulti", 1 "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 Specify the name of the SR-IOV network attachment definition CR. 2 Optional: Specify the MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify {"mac": true} in the SriovNetwork object. 3 Optional: Specify the IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Create the Pod CR by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n enable-allmulti-test Example output NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n enable-allmulti-test samplepod List all the interfaces associated with the pod by running the following command: sh-4.4# ip link Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2 1 eth0@if22 is the primary interface. 2 net1@if24 is the secondary interface configured with the network-attachment-definition that supports the all-multicast mode ( ALLMULTI flag). 23.8.5. Additional resources Understanding how to update labels on nodes 23.9. Using high performance multicast You can use multicast on your Single Root I/O Virtualization (SR-IOV) hardware network. 23.9.1. High performance multicast The OpenShift SDN network plugin supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. For high-bandwidth applications, such as Internet Protocol television (IPTV) streaming media and multipoint videoconferencing, you can use Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance.
When using additional SR-IOV interfaces for multicast: Multicast packets must be sent or received by a pod through the additional SR-IOV interface. The physical network that connects the SR-IOV interfaces decides the multicast routing and topology, which is not controlled by OpenShift Container Platform. 23.9.2. Configuring an SR-IOV interface for multicast The following procedure creates an example SR-IOV interface for multicast. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Create a SriovNetworkNodePolicy object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "8086" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0'] Create a SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { "type": "host-local", 2 "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [ {"dst": "224.0.0.0/5"}, {"dst": "232.0.0.0/5"} ], "gateway": "10.56.217.1" } resourceName: example 1 2 If you choose to configure DHCP as IPAM, ensure that you provision the following default routes through your DHCP server: 224.0.0.0/5 and 232.0.0.0/5 . This is to override the static multicast route set by the default network provider. Create a pod with a multicast application: apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: net-example spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: ["NET_ADMIN"] 1 command: [ "sleep", "infinity"] 1 The NET_ADMIN capability is required only if your application needs to assign the multicast IP address to the SR-IOV interface. Otherwise, it can be omitted. 23.10. Using DPDK and RDMA The containerized Data Plane Development Kit (DPDK) application is supported on OpenShift Container Platform. You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA). For information about supported devices, see Supported devices . 23.10.1. Using a virtual function in DPDK mode with an Intel NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "8086" deviceID: "158b" pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: vfio-pci 1 1 Specify the driver type for the virtual functions to vfio-pci . Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.
It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f intel-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- # ... 1 vlan: <vlan> resourceName: intelnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f intel-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/intelnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image that includes your application and the DPDK library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount a hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes.
For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G , and hugepages=16 will result in 16*1Gi hugepages being allocated during system boot. Create the DPDK pod by running the following command: USD oc create -f intel-dpdk-pod.yaml 23.10.2. Using a virtual function in DPDK mode with a Mellanox NIC You can create a network node policy and create a Data Plane Development Kit (DPDK) pod using a virtual function in DPDK mode with a Mellanox NIC. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the Single Root I/O Virtualization (SR-IOV) Network Operator. You have logged in as a user with cluster-admin privileges. Procedure Save the following SriovNetworkNodePolicy YAML configuration to an mlx-dpdk-node-policy.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. 2 Specify the driver type for the virtual functions to netdevice . A Mellanox SR-IOV Virtual Function (VF) can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container. 3 Enable Remote Direct Memory Access (RDMA) mode. This is required for Mellanox cards to work in DPDK mode. Note See Configuring an SR-IOV network device for a detailed explanation of each option in the SriovNetworkNodePolicy object. When applying the configuration specified in an SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-dpdk-node-policy.yaml Save the following SriovNetwork YAML configuration to an mlx-dpdk-network.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the IP Address Management (IPAM) Container Network Interface (CNI) plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See Configuring an SR-IOV network device for a detailed explanation on each option in the SriovNetwork object. An optional library, app-netutil, provides several API methods for gathering network information about the parent pod of a container.
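The ipam block scalar in the example above is deliberately elided. As one possibility, you could use a host-local configuration similar to the one shown in the multicast section of this document; the subnet and range values below are illustrative assumptions only, not required values:
ipam: |-
  {
    "type": "host-local",
    "subnet": "10.56.217.0/24",
    "rangeStart": "10.56.217.171",
    "rangeEnd": "10.56.217.181",
    "gateway": "10.56.217.1"
  }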
Create the SriovNetwork object by running the following command: USD oc create -f mlx-dpdk-network.yaml Save the following Pod YAML configuration to an mlx-dpdk-pod.yaml file: apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/mlxnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object mlx-dpdk-network is created. To create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image that includes your application and the DPDK library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires that exclusive CPUs be allocated from the kubelet. To do this, set the CPU Manager policy to static and create a pod with Guaranteed Quality of Service (QoS). 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes. Create the DPDK pod by running the following command: USD oc create -f mlx-dpdk-pod.yaml 23.10.3. Using the TAP CNI to run a rootless DPDK workload with kernel access DPDK applications can use virtio-user as an exception path to inject certain types of packets, such as log messages, into the kernel for processing. For more information about this feature, see Virtio_user as Exception Path . In OpenShift Container Platform version 4.14 and later, you can use non-privileged pods to run DPDK applications alongside the tap CNI plugin. To enable this functionality, you need to mount the vhost-net device by setting the needVhostNet parameter to true within the SriovNetworkNodePolicy object. Figure 23.1. DPDK and TAP example configuration Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You are logged in as a user with cluster-admin privileges. Ensure that setsebool container_use_devices=on is set as root on all nodes. Note Use the Machine Config Operator to set this SELinux boolean.
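A minimal sketch of such a MachineConfig object follows. It is one possible way to run setsebool at boot through a systemd unit; the object name, unit name, and worker role label are illustrative assumptions that you should adapt to your cluster:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-setsebool
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: setsebool.service
        enabled: true
        contents: |
          [Unit]
          Description=Set the container_use_devices SELinux boolean
          Before=kubelet.service

          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/setsebool container_use_devices=on
          RemainAfterExit=true

          [Install]
          WantedBy=multi-user.target
Applying a MachineConfig object causes the Machine Config Operator to roll the change out to the selected nodes, which can reboot them.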
Procedure Create a file, such as test-namespace.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" Create the new Namespace object by running the following command: USD oc apply -f test-namespace.yaml Create a file, such as sriov-node-network-policy.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: "15b3" 4 deviceID: "101b" 5 rootDevices: ["00:05.0"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 1 This indicates that the profile is tailored specifically for Mellanox Network Interface Controllers (NICs). 2 Setting isRdma to true is only required for a Mellanox NIC. 3 This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload. 4 The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. 5 The device hexadecimal code of the SR-IOV network device. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriov-node-network-policy.yaml Create the following SriovNetwork object, and then save the YAML in the sriov-network-attachment.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: "off" trust: "on" Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil , provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f sriov-network-attachment.yaml Create a file, such as tap-example.yaml , that defines a network attachment definition, with content like the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ "cniVersion": "0.4.0", "name": "tap", "plugins": [ { "type": "tap", "multiQueue": true, "selinuxcontext": "system_u:system_r:container_t:s0" }, { "type":"tuning", "capabilities":{ "mac":true } } ] }' 1 Specify the same target_namespace where the SriovNetwork object is created.
Create the NetworkAttachmentDefinition object by running the following command: USD oc apply -f tap-example.yaml Create a file, such as dpdk-pod-rootless.yaml , with content like the following example: apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {"name": "sriov-network", "namespace": "test-namespace"}, {"name": "tap-one", "interface": "ext0", "namespace": "test-namespace"}]' spec: nodeSelector: kubernetes.io/hostname: "worker-0" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: ["ALL"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: "1" 13 memory: "1Gi" cpu: "4" 14 hugepages-1Gi: "4Gi" 15 requests: openshift.io/sriovnic: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages 1 Specify the same target_namespace in which the SriovNetwork object is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Sets the group ownership of volume-mounted directories and files created in those volumes. 3 Specify the primary group ID used for running the container. 4 Specify the DPDK image that contains your application and the DPDK library used by application. 5 Removing all capabilities ( ALL ) from the container's securityContext means that the container has no special privileges beyond what is necessary for normal operation. 6 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. These capabilities must also be set in the binary file by using the setcap command. 7 Mellanox network interface controller (NIC) requires the NET_RAW capability. 8 Specify the user ID used for running the container. 9 This setting indicates that the container or containers within the pod should not be granted privileged access to the host system. 10 This setting allows a container to escalate its privileges beyond the initial non-root privileges it might have been assigned. 11 This setting ensures that the container runs with a non-root user. This helps enforce the principle of least privilege, limiting the potential impact of compromising the container and reducing the attack surface. 12 Mount a hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 13 Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 14 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 
15 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes. For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G , and hugepages=16 will result in 16*1Gi hugepages being allocated during system boot. 16 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. Create the DPDK pod by running the following command: USD oc create -f dpdk-pod-rootless.yaml Additional resources Enabling the container_use_devices boolean Creating a performance profile Configuring an SR-IOV network device 23.10.4. Overview of achieving a specific DPDK line rate To achieve a specific Data Plane Development Kit (DPDK) line rate, deploy the Node Tuning Operator and configure Single Root I/O Virtualization (SR-IOV). You must also tune the DPDK settings for the following resources: Isolated CPUs Hugepages The topology scheduler Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. DPDK test environment The following diagram shows the components of a traffic-testing environment: Traffic generator : An application that can generate high-volume packet traffic. SR-IOV-supporting NIC : A network interface card compatible with SR-IOV. The card runs a number of virtual functions on a physical interface. Physical Function (PF) : A PCI Express (PCIe) function of a network adapter that supports the SR-IOV interface. Virtual Function (VF) : A lightweight PCIe function on a network adapter that supports SR-IOV. The VF is associated with the PCIe PF on the network adapter. The VF represents a virtualized instance of the network adapter. Switch : A network switch. Nodes can also be connected back-to-back. testpmd : An example application included with DPDK. The testpmd application can be used to test the DPDK in a packet-forwarding mode. The testpmd application is also an example of how to build a fully-fledged application using the DPDK Software Development Kit (SDK). worker 0 and worker 1 : OpenShift Container Platform nodes. 23.10.5. Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate You can use the Node Tuning Operator to configure isolated CPUs, hugepages, and a topology scheduler. You can then use the Node Tuning Operator with Single Root I/O Virtualization (SR-IOV) to achieve a specific Data Plane Development Kit (DPDK) line rate. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You have logged in as a user with cluster-admin privileges. You have deployed a standalone Node Tuning Operator. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
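Before you begin, you can confirm that the Node Tuning Operator is running. One typical check, assuming the Operator runs in its default openshift-cluster-node-tuning-operator namespace, is:
USD oc get pods -n openshift-cluster-node-tuning-operator
The Operator pods should report a Running status before you apply a performance profile.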
Procedure Create a PerformanceProfile object based on the following example: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: "single-numa-node" nodeSelector: node-role.kubernetes.io/worker-cnf: "" 1 If hyperthreading is enabled on the system, allocate the relevant sibling threads to the isolated and reserved CPU groups. If the system contains multiple non-uniform memory access nodes (NUMAs), allocate CPUs from both NUMAs to both groups. You can also use the Performance Profile Creator for this task. For more information, see Creating a performance profile . 2 You can also specify a list of devices that will have their queues set to the reserved CPU count. For more information, see Reducing NIC queues using the Node Tuning Operator . 3 Allocate the number and size of hugepages needed. You can specify the NUMA configuration for the hugepages. By default, the system allocates an even number to every NUMA node on the system. If needed, you can request the use of a realtime kernel for the nodes. See Provisioning a worker with real-time capabilities for more information. Save the YAML file as mlx-dpdk-perfprofile-policy.yaml . Apply the performance profile using the following command: USD oc create -f mlx-dpdk-perfprofile-policy.yaml 23.10.5.1. Example SR-IOV Network Operator for virtual functions You can use the Single Root I/O Virtualization (SR-IOV) Network Operator to allocate and configure Virtual Functions (VFs) from SR-IOV-supporting Physical Function NICs on the nodes. For more information on deploying the Operator, see Installing the SR-IOV Network Operator . For more information on configuring an SR-IOV network device, see Configuring an SR-IOV network device . There are some differences between running Data Plane Development Kit (DPDK) workloads on Intel VFs and Mellanox VFs. This section provides object configuration examples for both VF types. The following is an example of an sriovNetworkNodePolicy object used to run DPDK applications on Intel NICs: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: ["ens3f0"] nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: ["ens3f1"] nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: dpdk_nic_2 1 For Intel NICs, deviceType must be vfio-pci . 2 If kernel communication with DPDK workloads is required, add needVhostNet: true . This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload.
The following is an example of an sriovNetworkNodePolicy object for Mellanox NICs: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - "0000:5e:00.1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - "0000:5e:00.0" nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 5 priority: 99 resourceName: dpdk_nic_2 1 For Mellanox devices, the deviceType must be netdevice . 2 For Mellanox devices, isRdma must be true . Mellanox cards are connected to DPDK applications using Flow Bifurcation. This mechanism splits traffic between Linux user space and kernel space, and can enhance line rate processing capability. 23.10.5.2. Example SR-IOV network operator The following is an example definition of an sriovNetwork object. In this case, Intel and Mellanox configurations are identical: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{"type": "host-local","ranges": [[{"subnet": "10.0.1.0/24"}]],"dataDir": "/run/my-orchestrator/container-ipam-state-1"}' 1 networkNamespace: dpdk-test 2 spoofChk: "off" trust: "on" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{"type": "host-local","ranges": [[{"subnet": "10.0.2.0/24"}]],"dataDir": "/run/my-orchestrator/container-ipam-state-1"}' networkNamespace: dpdk-test spoofChk: "off" trust: "on" resourceName: dpdk_nic_2 1 You can use a different IP Address Management (IPAM) implementation, such as Whereabouts. For more information, see Dynamic IP address assignment configuration with Whereabouts . 2 You must specify the networkNamespace where the network attachment definition will be created. You must create the sriovNetwork CR under the openshift-sriov-network-operator namespace. 3 The resourceName value must match that of the resourceName created under the sriovNetworkNodePolicy . 23.10.5.3. Example DPDK base workload The following is an example of a Data Plane Development Kit (DPDK) container: apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { "name": "dpdk-network-1", "namespace": "dpdk-test" }, { "name": "dpdk-network-2", "namespace": "dpdk-test" } ]' irq-load-balancing.crio.io: "disable" 2 cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: "16" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: "16" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages 1 Request the SR-IOV networks you need.
Resources for the devices will be injected automatically. 2 Disable CPU and IRQ load balancing for the pod. See Disabling interrupt processing for individual pods for more information. 3 Set the runtimeClass to performance-performance . Do not set the runtimeClass to HostNetwork or privileged . 4 Request an equal number of resources for requests and limits to start the pod with Guaranteed Quality of Service (QoS). Note Do not start the pod with sleep and then exec into the pod to start testpmd or the DPDK workload. This can add additional interrupts because the exec process is not pinned to any CPU. 23.10.5.4. Example testpmd script The following is an example script for running testpmd : #!/bin/bash set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02 This example uses two different sriovNetwork CRs. Each environment variable contains the Virtual Function (VF) PCI address that was allocated for the pod. If you use the same network in the pod definition, you must split the pciAddress . It is important to configure the correct MAC addresses of the traffic generator. This example uses custom MAC addresses. 23.10.6. Using a virtual function in RDMA mode with a Mellanox NIC Important RDMA over Converged Ethernet (RoCE) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform. Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. 2 Set the driver type for the virtual functions to netdevice . 3 Enable RDMA mode. Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes and, in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand.
After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-rdma-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 # ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f mlx-rdma-network.yaml Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: "1Gi" cpu: "4" 5 hugepages-1Gi: "4Gi" 6 requests: memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object mlx-rdma-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the RDMA image that includes your application and the RDMA library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the RDMA pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating the pod with Guaranteed QoS. 6 Specify the hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to the nodes. Create the RDMA pod by running the following command: USD oc create -f mlx-rdma-pod.yaml 23.10.7. A test pod template for clusters that use OVS-DPDK on OpenStack The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" # ...
spec: containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages 1 The name dpdk1 in this example is a user-created SriovNetworkNodePolicy resource. You can replace this name with that of a resource that you create. 2 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. 23.10.8. A test pod template for clusters that use OVS hardware offloading on OpenStack The following testpmd pod demonstrates Open vSwitch (OVS) hardware offloading on Red Hat OpenStack Platform (RHOSP). An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages 1 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. 23.10.9. Additional resources Creating a performance profile Adjusting the NIC queues with the performance profile Provisioning real-time and low latency workloads Installing the SR-IOV Network Operator Configuring an SR-IOV network device Dynamic IP address assignment configuration with Whereabouts Disabling interrupt processing for individual pods Configuring an SR-IOV Ethernet network attachment The app-netutil library provides several API methods for gathering network information about a container's parent pod. 23.11. Using pod-level bonding Bonding at the pod level is vital for enabling workloads inside pods that require high availability and more throughput. With pod-level bonding, you can create a bond interface from multiple single root I/O virtualization (SR-IOV) virtual function interfaces in a kernel mode interface. The SR-IOV virtual functions are passed into the pod and attached to a kernel driver. One scenario where pod-level bonding is required is creating a bond interface from multiple SR-IOV virtual functions on different physical functions. Creating a bond interface from two different physical functions on the host achieves high availability and throughput at the pod level. For guidance on tasks such as creating an SR-IOV network, network policies, network attachment definitions and pods, see Configuring an SR-IOV network device . 23.11.1. Configuring a bond interface from two SR-IOV interfaces Bonding enables multiple network interfaces to be aggregated into a single logical "bonded" interface. Bond Container Network Interface (Bond-CNI) brings bond capability into containers.
Bond-CNI can be created by using Single Root I/O Virtualization (SR-IOV) virtual functions and placing them in the container network namespace. OpenShift Container Platform only supports Bond-CNI using SR-IOV virtual functions. The SR-IOV Network Operator provides the SR-IOV CNI plugin needed to manage the virtual functions. Other CNIs or types of interfaces are not supported. Prerequisites The SR-IOV Network Operator must be installed and configured to obtain virtual functions in a container. To configure SR-IOV interfaces, an SR-IOV network and policy must be created for each interface. The SR-IOV Network Operator creates a network attachment definition for each SR-IOV interface, based on the SR-IOV network and policy defined. The linkState is set to the default value auto for the SR-IOV virtual function. 23.11.1.1. Creating a bond network attachment definition Now that the SR-IOV virtual functions are available, you can create a bond network attachment definition. apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ "type": "bond", 1 "cniVersion": "0.3.1", "name": "bond-net1", "mode": "active-backup", 2 "failOverMac": 1, 3 "linksInContainer": true, 4 "miimon": "100", "mtu": 1500, "links": [ 5 {"name": "net1"}, {"name": "net2"} ], "ipam": { "type": "host-local", "subnet": "10.56.217.0/24", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } }' 1 The cni-type is always set to bond . 2 The mode attribute specifies the bonding mode. Note The bonding modes supported are: balance-rr - 0 active-backup - 1 balance-xor - 2 For balance-rr or balance-xor modes, you must set the trust mode to on for the SR-IOV virtual function. 3 The failOverMac attribute is mandatory for active-backup mode and must be set to 1. 4 The linksInContainer=true flag informs the Bond CNI that the required interfaces are to be found inside the container. By default, Bond CNI looks for these interfaces on the host, which does not work for integration with SR-IOV and Multus. 5 The links section defines which interfaces will be used to create the bond. By default, Multus names the attached interfaces as "net" plus a consecutive number, starting with one. 23.11.1.2. Creating a pod using a bond interface Test the setup by creating a pod with a YAML file named, for example, podbonding.yaml , with content similar to the following: apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: ["/bin/bash", "-c", "sleep INF"] 1 Note the network annotation: it contains two SR-IOV network attachments and one bond network attachment. The bond attachment uses the two SR-IOV interfaces as bonded port interfaces. A sketch of the assumed SR-IOV networks follows.
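The pod manifest above assumes that two SriovNetwork objects named sriovnet1 and sriovnet2 already exist and target the demo namespace, as stated in the prerequisites. The following is a minimal sketch of one such object, not part of the original procedure; the resourceName value sriov_nic_1 is an assumption and must match the resource name defined in your own SriovNetworkNodePolicy:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriovnet1
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: demo
  resourceName: sriov_nic_1 # assumed resource name; replace with your own
  ipam: '{}' # IP address management is handled by the bond-net1 attachment in this setup
Because the bond-net1 attachment assigns the IP address to the bonded interface, the ipam field of each SR-IOV network can be left empty in this setup.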
Apply the YAML by running the following command: USD oc apply -f podbonding.yaml Inspect the pod interfaces with the following command: USD oc rsh -n demo bondpod1 sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3 1 The bond interface is automatically named net3 . To set a specific interface name, add an @name suffix to the pod's k8s.v1.cni.cncf.io/networks annotation. 2 The net1 interface is based on an SR-IOV virtual function. 3 The net2 interface is based on an SR-IOV virtual function. Note If no interface names are configured in the pod annotation, interface names are assigned automatically as net<n> , with <n> starting at 1 . Optional: If you want to set a specific interface name, for example bond0 , edit the k8s.v1.cni.cncf.io/networks annotation and set bond0 as the interface name as follows: annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0 23.12. Configuring hardware offloading As a cluster administrator, you can configure hardware offloading on compatible nodes to increase data processing performance and reduce load on host CPUs. 23.12.1. About hardware offloading Open vSwitch hardware offloading is a method of processing network tasks by diverting them away from the CPU and offloading them to a dedicated processor on a network interface controller. As a result, clusters can benefit from faster data transfer speeds, reduced CPU workloads, and lower computing costs. The key element for this feature is a modern class of network interface controllers known as SmartNICs. A SmartNIC is a network interface controller that is able to handle computationally heavy network processing tasks. In the same way that a dedicated graphics card can improve graphics performance, a SmartNIC can improve network performance. In each case, a dedicated processor improves performance for a specific type of processing task. In OpenShift Container Platform, you can configure hardware offloading for bare metal nodes that have a compatible SmartNIC. Hardware offloading is configured and enabled by the SR-IOV Network Operator. Hardware offloading is not compatible with all workloads or application types. Only the following two communication types are supported: pod-to-pod pod-to-service, where the service is a ClusterIP service backed by a regular pod In all cases, hardware offloading takes place only when those pods and services are assigned to nodes that have a compatible SmartNIC. Suppose, for example, that a pod on a node with hardware offloading tries to communicate with a service on a regular node.
On the regular node, all the processing takes place in the kernel, so the overall performance of the pod-to-service communication is limited to the maximum performance of that regular node. Hardware offloading is not compatible with DPDK applications. Enabling hardware offloading on a node, but not configuring pods to use it, can result in decreased throughput performance for pod traffic. You cannot configure hardware offloading for pods that are managed by OpenShift Container Platform. 23.12.2. Supported devices Hardware offloading is supported on the following network interface controllers: Table 23.15. Supported network interface controllers
Manufacturer Model Vendor ID Device ID
Mellanox MT27800 Family [ConnectX‐5] 15b3 1017
Mellanox MT28880 Family [ConnectX‐5 Ex] 15b3 1019
Mellanox MT2892 Family [ConnectX‐6 Dx] 15b3 101d
Mellanox MT2894 Family [ConnectX-6 Lx] 15b3 101f
Mellanox MT42822 BlueField-2 in ConnectX-6 NIC mode 15b3 a2d6
23.12.3. Prerequisites Your cluster has at least one bare metal machine with a network interface controller that is supported for hardware offloading. You installed the SR-IOV Network Operator . Your cluster uses the OVN-Kubernetes network plugin . In your OVN-Kubernetes network plugin configuration , the gatewayConfig.routingViaHost field is set to false . 23.12.4. Setting the SR-IOV Network Operator into systemd mode To support hardware offloading, you must first set the SR-IOV Network Operator into systemd mode. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user that has the cluster-admin role. Procedure Create a SriovOperatorConfig custom resource (CR) to deploy all the SR-IOV Operator components: Create a file named sriovOperatorConfig.yaml that contains the following YAML: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: "systemd" 2 logLevel: 2 1 The only valid name for the SriovOperatorConfig resource is default and it must be in the namespace where the Operator is deployed. 2 Setting the SR-IOV Network Operator into systemd mode is only relevant for Open vSwitch hardware offloading. Create the resource by running the following command: USD oc apply -f sriovOperatorConfig.yaml 23.12.5. Configuring a machine config pool for hardware offloading To enable hardware offloading, you now create a dedicated machine config pool and configure it to work with the SR-IOV Network Operator. Prerequisites The SR-IOV Network Operator is installed and set into systemd mode. Procedure Create a machine config pool for machines you want to use hardware offloading on. Create a file, such as mcp-offloading.yaml , with content like the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: "" 3 1 2 The name of your machine config pool for hardware offloading. 3 This node role label is used to add nodes to the machine config pool. Apply the configuration for the machine config pool: USD oc create -f mcp-offloading.yaml Add nodes to the machine config pool.
Label each node with the node role label of your pool: USD oc label node worker-2 node-role.kubernetes.io/mcp-offloading="" Optional: To verify that the new pool is created, run the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.27.3 master-1 Ready master 2d v1.27.3 master-2 Ready master 2d v1.27.3 worker-0 Ready worker 2d v1.27.3 worker-1 Ready worker 2d v1.27.3 worker-2 Ready mcp-offloading,worker 47h v1.27.3 worker-3 Ready mcp-offloading,worker 47h v1.27.3 Add this machine config pool to the SriovNetworkPoolConfig custom resource: Create a file, such as sriov-pool-config.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1 1 The name of your machine config pool for hardware offloading. Apply the configuration: USD oc create -f sriov-pool-config.yaml Note When you apply the configuration specified in a SriovNetworkPoolConfig object, the SR-IOV Operator drains and restarts the nodes in the machine config pool. It might take several minutes for a configuration change to apply. 23.12.6. Configuring the SR-IOV network node policy You can create an SR-IOV network device configuration for a node by creating an SR-IOV network node policy. To enable hardware offloading, you must define the .spec.eSwitchMode field with the value "switchdev" . The following procedure creates an SR-IOV interface for a network interface controller with hardware offloading. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Create a file, such as sriov-node-policy.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: "switchdev" 3 nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 6 priority: 5 resourceName: mlxnics 1 The name for the custom resource object. 2 Required. Hardware offloading is not supported with vfio-pci . 3 Required. Apply the configuration for the policy: USD oc create -f sriov-node-policy.yaml Note When you apply the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator drains and restarts the nodes in the machine config pool. It might take several minutes for a configuration change to apply. 23.12.6.1. An example SR-IOV network node policy for OpenStack The following example describes an SR-IOV interface for a network interface controller (NIC) with hardware offloading on Red Hat OpenStack Platform (RHOSP). An SR-IOV interface for a NIC with hardware offloading on RHOSP apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: USD{name} 23.12.7.
Improving network traffic performance using a virtual function Follow this procedure to assign a virtual function to the OVN-Kubernetes management port and increase its network traffic performance. This procedure results in the creation of two pools: the first has a virtual function used by OVN-Kubernetes, and the second comprises the remaining virtual functions. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Add the network.operator.openshift.io/smart-nic label to each worker node with a SmartNIC present by running the following command: USD oc label node <node-name> network.operator.openshift.io/smart-nic= Use the oc get nodes command to get a list of the available nodes. Create a policy named sriov-node-mgmt-vf-policy.yaml for the management port with content such as the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: "switchdev" nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: "" numVfs: 6 2 priority: 5 resourceName: mgmtvf 1 Replace this device with the appropriate network device for your use case. The #0-0 part of the pfNames value reserves a single virtual function used by OVN-Kubernetes. 2 The value provided here is an example. Replace this value with one that meets your requirements. For more information, see SR-IOV network node configuration object in the Additional resources section. Create a policy named sriov-node-policy.yaml with content such as the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: "switchdev" nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: "" numVfs: 6 2 priority: 5 resourceName: mlxnics 1 Replace this device with the appropriate network device for your use case. 2 The value provided here is an example. Replace this value with the value specified in the sriov-node-mgmt-vf-policy.yaml file. For more information, see SR-IOV network node configuration object in the Additional resources section. Note The sriov-node-mgmt-vf-policy.yaml file has different values for the pfNames and resourceName keys than the sriov-node-policy.yaml file. Apply the configuration for both policies: USD oc create -f sriov-node-policy.yaml USD oc create -f sriov-node-mgmt-vf-policy.yaml Create a Cluster Network Operator (CNO) ConfigMap in the cluster for the management configuration: Create a ConfigMap named hardware-offload-config.yaml with the following contents: apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf Apply the configuration for the ConfigMap: USD oc create -f hardware-offload-config.yaml Additional resources SR-IOV network node configuration object 23.12.8. Creating a network attachment definition After you define the machine config pool and the SR-IOV network node policy, you can create a network attachment definition for the network interface card you specified. Prerequisites You installed the OpenShift CLI ( oc ). 
You have access to the cluster as a user with the cluster-admin role. Procedure Create a file, such as net-attach-def.yaml , with content like the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{"cniVersion":"0.3.1","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{}}' 1 The name for your network attachment definition. 2 The namespace for your network attachment definition. 3 This is the value of the spec.resourceName field you specified in the SriovNetworkNodePolicy object. Apply the configuration for the network attachment definition: USD oc create -f net-attach-def.yaml Verification Run the following command to see whether the new definition is present: USD oc get net-attach-def -A Example output NAMESPACE NAME AGE net-attach-def net-attach-def 43h 23.12.9. Adding the network attachment definition to your pods After you create the machine config pool, the SriovNetworkPoolConfig and SriovNetworkNodePolicy custom resources, and the network attachment definition, you can apply these configurations to your pods by adding the network attachment definition to your pod specifications. Procedure In the pod specification, add the .metadata.annotations.k8s.v1.cni.cncf.io/networks field and specify the network attachment definition you created for hardware offloading: .... metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1 1 The value must be the name and namespace of the network attachment definition you created for hardware offloading. 23.13. Switching Bluefield-2 from DPU to NIC You can switch the Bluefield-2 network device from data processing unit (DPU) mode to network interface controller (NIC) mode. 23.13.1. Switching Bluefield-2 from DPU mode to NIC mode Use the following procedure to switch Bluefield-2 from data processing units (DPU) mode to network interface controller (NIC) mode. Important Currently, only switching Bluefield-2 from DPU to NIC mode is supported. Switching from NIC mode to DPU mode is unsupported. Prerequisites You have installed the SR-IOV Network Operator. For more information, see "Installing SR-IOV Network Operator". You have updated Bluefield-2 to the latest firmware. For more information, see Firmware for NVIDIA BlueField-2 . 
Procedure Add the following labels to each of your worker nodes by entering the following commands: USD oc label node <example_node_name_one> node-role.kubernetes.io/sriov= USD oc label node <example_node_name_two> node-role.kubernetes.io/sriov= Create a machine config pool for the SR-IOV Network Operator, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: "" Apply the following machineconfig.yaml file to the worker nodes: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target 1 Optional: The PCI address of a specific card can optionally be specified, for example ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic 0000:5e:00.0 || echo done' . By default, the first device is selected. If there is more than one device, you must specify which PCI address to be used. The PCI address must be the same on all nodes that are switching Bluefield-2 from DPU mode to NIC mode. Wait for the worker nodes to restart. After restarting, the Bluefield-2 network device on the worker nodes is switched into NIC mode. Optional: You might need to restart the host hardware because most recent Bluefield-2 firmware releases require a hardware restart to switch into NIC mode. Additional resources Installing SR-IOV Network Operator 23.14. Uninstalling the SR-IOV Network Operator To uninstall the SR-IOV Network Operator, you must delete any running SR-IOV workloads, uninstall the Operator, and delete the webhooks that the Operator used. 23.14.1. Uninstalling the SR-IOV Network Operator As a cluster administrator, you can uninstall the SR-IOV Network Operator. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have the SR-IOV Network Operator installed. Procedure Delete all SR-IOV custom resources (CRs): USD oc delete sriovnetwork -n openshift-sriov-network-operator --all USD oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all USD oc delete sriovibnetwork -n openshift-sriov-network-operator --all Follow the instructions in the "Deleting Operators from a cluster" section to remove the SR-IOV Network Operator from your cluster. 
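Optionally, before deleting the custom resource definitions, you can confirm that no SR-IOV custom resources remain in the cluster. The following check is a minimal sketch and is not part of the original procedure: USD oc get sriovnetwork,sriovnetworknodepolicy,sriovibnetwork -n openshift-sriov-network-operator If the deletion succeeded, the command reports that no resources are found.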
Delete the SR-IOV custom resource definitions that remain in the cluster after the SR-IOV Network Operator is uninstalled: USD oc delete crd sriovibnetworks.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io USD oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io USD oc delete crd sriovnetworks.sriovnetwork.openshift.io USD oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io Delete the SR-IOV webhooks: USD oc delete mutatingwebhookconfigurations network-resources-injector-config USD oc delete MutatingWebhookConfiguration sriov-operator-webhook-config USD oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config Delete the SR-IOV Network Operator namespace: USD oc delete namespace openshift-sriov-network-operator Additional resources Deleting Operators from a cluster
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-dpdk-pod.yaml", "apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\"", "oc apply -f test-namespace.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: \"15b3\" 4 deviceID: \"101b\" 5 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc create -f sriov-node-network-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"", "oc create -f sriov-network-attachment.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tap\", \"plugins\": [ { \"type\": \"tap\", \"multiQueue\": true, \"selinuxcontext\": \"system_u:system_r:container_t:s0\" }, { \"type\":\"tuning\", \"capabilities\":{ \"mac\":true } } ] }'", "oc apply -f tap-example.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {\"name\": \"sriov-network\", \"namespace\": \"test-namespace\"}, {\"name\": \"tap-one\", \"interface\": \"ext0\", \"namespace\": \"test-namespace\"}]' spec: nodeSelector: kubernetes.io/hostname: \"worker-0\" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: [\"ALL\"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: \"1\" 13 memory: \"1Gi\" cpu: \"4\" 14 hugepages-1Gi: \"4Gi\" 15 requests: openshift.io/sriovnic: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages", "oc 
create -f dpdk-pod-rootless.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: \"single-numa-node\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc create -f mlx-dpdk-perfprofile-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: [\"ens3f0\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: [\"ens3f1\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_2", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - \"0000:5e:00.1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - \"0000:5e:00.0\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_2", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.1.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' 1 networkNamespace: dpdk-test 2 spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.2.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' networkNamespace: dpdk-test spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_2", "apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { \"name\": \"dpdk-network-1\", \"namespace\": \"dpdk-test\" }, { \"name\": \"dpdk-network-2\", \"namespace\": \"dpdk-test\" } ]' irq-load-balancing.crio.io: \"disable\" 2 cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages", "#!/bin/bash 
set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-rdma-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-rdma-network.yaml", "apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-rdma-pod.yaml", "apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ \"type\": \"bond\", 1 \"cniVersion\": \"0.3.1\", \"name\": \"bond-net1\", \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"mtu\": 1500, \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" 
}], \"gateway\": \"10.56.217.1\" } }'", "apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: [\"/bin/bash\", \"-c\", \"sleep INF\"]", "oc apply -f podbonding.yaml", "oc rsh -n demo bondpod1 sh-4.4# sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3", "annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: \"systemd\" 2 logLevel: 2", "oc apply -f sriovOperatorConfig.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: \"\" 3", "oc create -f mcp-offloading.yaml", "oc label node worker-2 node-role.kubernetes.io/mcp-offloading=\"\"", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.27.3 master-1 Ready master 2d v1.27.3 master-2 Ready master 2d v1.27.3 worker-0 Ready worker 2d v1.27.3 worker-1 Ready worker 2d v1.27.3 worker-2 Ready mcp-offloading,worker 47h v1.27.3 worker-3 Ready mcp-offloading,worker 47h v1.27.3", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1", "oc create -f <SriovNetworkPoolConfig_name>.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: \"switchdev\" 3 nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 6 priority: 5 resourceName: mlxnics", "oc create -f sriov-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' 
numVfs: 1 priority: 99 resourceName: USD{name}", "oc label node <node-name> network.operator.openshift.io/smart-nic=", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mgmtvf", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mlxnics", "oc create -f sriov-node-policy.yaml", "oc create -f sriov-node-mgmt-vf-policy.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf", "oc create -f hardware-offload-config.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"ovn-kubernetes\",\"type\":\"ovn-k8s-cni-overlay\",\"ipam\":{},\"dns\":{}}'", "oc create -f net-attach-def.yaml", "oc get net-attach-def -A", "NAMESPACE NAME AGE net-attach-def net-attach-def 43h", ". metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1", "oc label node <example_node_name_one> node-role.kubernetes.io/sriov=", "oc label node <example_node_name_two> node-role.kubernetes.io/sriov=", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target", "oc delete sriovnetwork -n openshift-sriov-network-operator --all", "oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all", 
"oc delete sriovibnetwork -n openshift-sriov-network-operator --all", "oc delete crd sriovibnetworks.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io", "oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io", "oc delete crd sriovnetworks.sriovnetwork.openshift.io", "oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io", "oc delete mutatingwebhookconfigurations network-resources-injector-config", "oc delete MutatingWebhookConfiguration sriov-operator-webhook-config", "oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config", "oc delete namespace openshift-sriov-network-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/hardware-networks
Chapter 1. OpenShift Container Platform installation overview
Chapter 1. OpenShift Container Platform installation overview 1.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster, which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform: it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 1.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 1.1. OpenShift Container Platform installation targets and dependencies 1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.14 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines.
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 1.1.3. Glossary of common terms for OpenShift Container Platform installing The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process. Assisted Installer An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster. Agent-based Installer An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments. Bootstrap node A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane. Control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines. Compute node Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes. Disconnected installation In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment. The OpenShift Container Platform installation program A program that provisions the infrastructure and deploys a cluster. Installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. Ignition config files A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes. Kubernetes manifests Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on. Kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Load balancers A load balancer serves as the single point of contact for clients. 
Load balancers for the API distribute incoming traffic across control plane nodes. Machine Config Operator An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster. Operators The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers. User-provisioned infrastructure You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 1.1.4. Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.14, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. 
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. The installation program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure.
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Additional resources Red Hat OpenShift Network Calculator 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: $ oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.14, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components.
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.14, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. Red Hat OpenShift Network Calculator can help you design your cluster network during both the deployment and expansion phases. It addresses common questions related to the cluster network and provides output in a convenient JSON format.
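The verification procedure in section 1.1.5 checks worker node and machine status. The health criteria above also mention cluster Operators and pending CSRs; the following is a minimal follow-up sketch, not part of the documented procedure. It assumes the standard oc CLI with a kubeconfig for the new cluster, and <csr_name> is a placeholder:

$ oc get clusteroperators                 # every cluster Operator should report AVAILABLE as True
$ oc get csr                              # list certificate signing requests; look for Pending node-bootstrapper entries
$ oc adm certificate approve <csr_name>   # approve one pending CSR so that its node can join

Approving pending node-bootstrapper CSRs is also the manual step that the Important note in section 1.1.4 describes for recovering kubelet certificates after a cluster restart.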
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installation_overview/ocp-installation-overview
Chapter 7. Network considerations
Chapter 7. Network considerations Review the strategies for redirecting your application network traffic after migration. 7.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 7.1.1. Isolating the DNS domain of the target cluster from the clients You can allow the clients' requests sent to the DNS domain of the source cluster to reach the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 7.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: $ oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This will redirect traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default Ingress Controller router accepts requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default Ingress Controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field.
The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 7.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP.
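The gradual and user-based strategies above leave the choice of proxy open. The following HAProxy fragment is one possible sketch, not a requirement of this chapter: the VIP addresses, port, backend names, header name, and header value are all illustrative placeholders. A weighted backend implements the percentage split, and an ACL on a request header implements user-based redirection. Inspecting headers requires HTTP mode, so for HTTPS the proxy must terminate TLS:

frontend app1
    bind *:80
    mode http
    # send predefined test users to the target cluster first
    acl test_user hdr(X-Migration-Group) -m str test-customers
    use_backend target_cluster if test_user
    default_backend weighted_split

backend weighted_split
    mode http
    # roughly 10% of the remaining traffic goes to the target router VIP; raise this weight in stages
    server source_router 192.0.2.10:80 weight 90
    server target_router 203.0.113.20:80 weight 10

backend target_cluster
    mode http
    server target_router 203.0.113.20:80

When the weight on the target router carries all of the traffic, the per-application DNS record can point directly at the target cluster router's VIP and the proxy entry can be removed.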
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migration_toolkit_for_containers/network-considerations-mtc
15.5. Configuring Access Control for Users
15.5. Configuring Access Control for Users Authorization is the mechanism that checks whether a user is allowed to perform an operation. Authorization points are defined in certain groups of operations that require an authorization check. 15.5.1. About Access Control Access control lists (ACLs) are the mechanisms that specify the authorization to server operations. An ACL exists for each set of operations where an authorization check occurs. Additional operations can be added to an ACL. The ACL contains access control instructions (ACIs) which specifically allow or deny operations, such as read or modify. The ACI also contains an evaluator expression. The default implementation of ACLs specifies only users, groups, and IP addresses as possible evaluator types. Each ACI in an ACL specifies whether access is allowed or denied, which specific operation is being allowed or denied, and which users, groups, or IP addresses are being allowed or denied to perform the operation. The privileges of Certificate System users are changed by changing the access control lists (ACLs) that are associated with the group in which the user is a member, for the users themselves, or for the IP address of the user. New groups are assigned access control by adding that group to the access control lists. For example, a new group for administrators who are only authorized to view logs, LogAdmins , can be added to the ACLs relevant to logs to allow read or modify access to this group. If this group is not added to any other ACLs, members of this group only have access to the logs. The access for a user, group, or IP address is changed by editing the ACI entries in the ACLs. In the ACL interface, each ACI is shown on a line of its own. In this interface window, the ACI has the following syntax: Note The IP address can be an IPv4 or IPv6 address. An IPv4 address must be in the format n.n.n.n or n.n.n.n,m.m.m.m . For example, 128.21.39.40 or 128.21.39.40,255.255.255.00 . An IPv6 address uses a 128-bit namespace, with the IPv6 address separated by colons and the netmask separated by periods. For example, 0:0:0:0:0:0:13.1.68.3 , FF01::43 , 0:0:0:0:0:0:13.1.68.3,FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:255.255.255.0 , and FF01::43,FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FF00:0000 . For example, the following is an ACI that allows administrators to perform read operations: An ACI can have more than one operation or action configured. The operations are separated with a comma with no space on either side. For example: An ACI can have more than one group, user, or IP address by separating them with two pipe symbols ( || ) with a space on either side. For example: The administrative console can create or modify ACIs. The interface sets whether to allow or deny the operation in the Allow and Deny field, sets which operations are possible in the Operations field, and then lists the groups, users, or IP addresses being granted or denied access in the Syntax field. An ACI can either allow or deny an operation for the specified group, user ID, or IP address. Generally, ACIs do not need to be created to deny access. If there are no allow ACIs that include a user ID, group, or IP address, then the group, user ID, or IP address is denied access. Note If a user is not explicitly allowed access to any of the operations for a resource, then this user is considered denied; the user does not need to be explicitly denied access. For example, user JohnB is a member of the Administrators group.
If an ACL has only the following ACI, JohnB is denied any access since he does not match any of the allow ACIs: There usually is no need to include a deny statement. Some situations can arise, however, when it is useful to specify one. For example, JohnB , a member of the Administrators group, has just been fired. It may be necessary to deny access specifically to JohnB if the user cannot be deleted immediately. Another situation is that a user, BrianC , is an administrator, but he should not have the ability to change some resource. Since the Administrators group must access this resource, BrianC can be specifically denied access by creating an ACI that denies this user access. The allowed rights are the operations which the ACI controls, either by allowing or denying permission to perform the operation. The actions that can be set for an ACL vary depending on the ACL and subsystem. Two common operations that can be defined are read and modify. The Syntax field of the ACI editor sets the evaluator for the expression. The evaluator can specify group, name, and IP address (both IPv4 and IPv6 addresses). These are specified along with the name of the entity, using equals ( = ) or does not equal ( != ). The syntax to include a group in the ACL is group="groupname" . The syntax to exclude a group is group!="groupname" , which allows any group except for the group named. For example: It is also possible to use regular expressions to specify the group, such as using wildcard characters like an asterisk ( * ). For example: For more information on supported regular expression patterns, see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html . The syntax to include a user in the ACL is user="userID" . The syntax to exclude the user is user!="userID" , which allows any user ID except for the user ID named. For example: To specify all users, provide the value anybody . For example: It is also possible to use regular expressions to specify the user names, such as using wildcard characters like an asterisk ( * ). For example: For more information on supported regular expression patterns, see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html . The syntax to include an IP address in the ACL is ipaddress="ipaddress" . The syntax to exclude an IP address from the ACL is ipaddress!="ipaddress" . An IP address is specified using its numeric value; DNS values are not permitted. For example: The IP address can be an IPv4 address, as shown above, or an IPv6 address. An IPv4 address has the format n.n.n.n or n.n.n.n,m.m.m.m with the netmask. An IPv6 address uses a 128-bit namespace, with the IPv6 address separated by colons and the netmask separated by periods. For example: It is also possible to use regular expressions to specify the IP address, such as using wildcard characters like an asterisk ( * ). For example: For more information on supported regular expression patterns, see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html . It is possible to create a string with more than one value by separating each value with two pipe characters (||) with a space on either side. For example: 15.5.2. Changing the Access Control Settings for the Subsystem For instructions on how to configure this feature by editing the CS.cfg file, see the Changing the Access Control Settings for the Subsystem section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . 15.5.3.
Adding ACLs ACLs are stored in the internal database and can only be modified in the administrative console. To add a new ACL: Log into the administrative console. Select Access Control List . Click Add to open the Access Control Editor . Fill in the Resource name and Available rights fields. To add an access control instruction (ACI), click Add , and supply the ACI information. Select the allow or deny radio button from the Access field to allow or deny the operation to the groups, users, or IP addresses specified. For more information about allowing or denying access, see Section 15.5.1, "About Access Control" . Set the rights. The available options are read and modify . To select both, hold the Ctrl or Shift key while selecting the entries. Specify the user, group, or IP address that will be granted or denied access in the Syntax field. See Section 15.5.1, "About Access Control" for details on syntax. Click OK to return to the Access Control Editor window. Click OK to store the ACI. 15.5.4. Editing ACLs ACLs are stored in the internal database and can only be modified in the administrative console. To edit the existing ACLs: Log into the administrative console. Select Access Control List in the left navigation menu. Select the ACL to edit from the list, and click Edit . The ACL opens in the Access Control Editor window. To add an ACI, click Add , and supply the ACI information. To edit an ACI, select the ACI from the list in the ACI entries text area of the ACL Editor window. Click Edit . Select the allow or deny radio button from the Access field to allow or deny the operation to the groups, users, or IP addresses specified. For more information about allowing or denying access, see Section 15.5.1, "About Access Control" . Set the rights for the access control. The options are read and modify . To set both, use the Ctrl or Shift keys. Specify the user, group, or IP address that will be granted or denied access in the Syntax field. See Section 15.5.1, "About Access Control" for details on syntax.
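To tie the pieces of this section together, the deny scenario described in Section 15.5.1 could be expressed with a pair of ACIs such as the following. This is an illustrative sketch only; the rights shown are hypothetical for the resource in question, and it assumes that a matching deny ACI overrides a group-based allow, as the BrianC example above implies:

allow (read,modify) group="Administrators" || group="Auditors"
deny (modify) user="BrianC"

With these two entries, members of the Administrators and Auditors groups can read and modify the resource, while BrianC, although an administrator, is specifically blocked from modifying it.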
[ "allow|deny (operation) user|group|IP=\"name\"", "allow (read) group=\"Administrators\"", "allow (read,modify) group=\"Administrators\"", "allow (read) group=\"Administrators\" || group=\"Auditors\"", "Allow (read,modify) group=\"Auditors\" || user=\"BrianC\"", "group=\"Administrators\" || group!=\"Auditors\"", "group=\"* Managers\"", "user=\"BobC\" || user!=\"JaneK\"", "user=\"anybody\"", "user=\"*johnson\"", "ipaddress=\"12.33.45.99\" ipaddress!=\"23.99.09.88\"", "ipaddress=\"0:0:0:0:0:0:13.1.68.3\"", "ipaddress=\"12.33.45.*\"", "user=\"BobC\" || group=\"Auditors\" || group=\"Administrators\"" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/authorization_for_crts_users
Chapter 20. OperatorHub [config.openshift.io/v1]
Chapter 20. OperatorHub [config.openshift.io/v1] Description OperatorHub is the Schema for the operatorhubs API. It can be used to change the state of the default hub sources for OperatorHub on the cluster from enabled to disabled and vice versa. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 20.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorHubSpec defines the desired state of OperatorHub status object OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. 20.1.1. .spec Description OperatorHubSpec defines the desired state of OperatorHub Type object Property Type Description disableAllDefaultSources boolean disableAllDefaultSources allows you to disable all the default hub sources. If this is true, a specific entry in sources can be used to enable a default source. If this is false, a specific entry in sources can be used to disable or enable a default source. sources array sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. sources[] object HubSource is used to specify the hub source and its configuration 20.1.2. .spec.sources Description sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. Type array 20.1.3. .spec.sources[] Description HubSource is used to specify the hub source and its configuration Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster name string name is the name of one of the default hub sources 20.1.4. .status Description OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. Type object Property Type Description sources array sources encapsulates the result of applying the configuration for each hub source sources[] object HubSourceStatus is used to reflect the current state of applying the configuration to a default source 20.1.5. 
.status.sources Description sources encapsulates the result of applying the configuration for each hub source Type array 20.1.6. .status.sources[] Description HubSourceStatus is used to reflect the current state of applying the configuration to a default source Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster message string message provides more information regarding failures name string name is the name of one of the default hub sources status string status indicates success or failure in applying the configuration 20.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/operatorhubs DELETE : delete collection of OperatorHub GET : list objects of kind OperatorHub POST : create an OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name} DELETE : delete an OperatorHub GET : read the specified OperatorHub PATCH : partially update the specified OperatorHub PUT : replace the specified OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name}/status GET : read status of the specified OperatorHub PATCH : partially update status of the specified OperatorHub PUT : replace status of the specified OperatorHub 20.2.1. /apis/config.openshift.io/v1/operatorhubs Table 20.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorHub Table 20.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call.
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 20.3.
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorHub Table 20.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from.
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 20.5. HTTP responses HTTP code Response body 200 - OK OperatorHubList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorHub Table 20.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.7. Body parameters Parameter Type Description body OperatorHub schema Table 20.8. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 202 - Accepted OperatorHub schema 401 - Unauthorized Empty 20.2.2. /apis/config.openshift.io/v1/operatorhubs/{name} Table 20.9. Global path parameters Parameter Type Description name string name of the OperatorHub Table 20.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorHub Table 20.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 20.12. Body parameters Parameter Type Description body DeleteOptions schema Table 20.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorHub Table 20.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 20.15. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorHub Table 20.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 20.17. Body parameters Parameter Type Description body Patch schema Table 20.18. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorHub Table 20.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.20. Body parameters Parameter Type Description body OperatorHub schema Table 20.21. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty 20.2.3. /apis/config.openshift.io/v1/operatorhubs/{name}/status Table 20.22.
Global path parameters Parameter Type Description name string name of the OperatorHub Table 20.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OperatorHub Table 20.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 20.25. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorHub Table 20.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 20.27. Body parameters Parameter Type Description body Patch schema Table 20.28. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorHub Table 20.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint .
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.30. Body parameters Parameter Type Description body OperatorHub schema Table 20.31. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty
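As an illustration of the spec fields documented in this chapter, the following manifest disables all default hub sources and then selectively re-enables one of them. This is a hedged sketch: the cluster-scoped singleton is conventionally named cluster, and community-operators is used here as an assumed source name; verify both against your cluster before applying.

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
  sources:
  - name: community-operators
    disabled: false

The same change can be made imperatively through the PATCH endpoint described above, for example: $ oc patch operatorhub cluster --type merge -p '{"spec":{"disableAllDefaultSources":true}}'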
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/operatorhub-config-openshift-io-v1
probe::signal.procmask
probe::signal.procmask Name probe::signal.procmask - Examining or changing blocked signals Synopsis signal.procmask Values name Name of the probe point sigset The actual value to be set for the sigset_t how Indicates how to change the blocked signals; possible values are SIG_BLOCK=0 (for blocking signals), SIG_UNBLOCK=1 (for unblocking signals), and SIG_SETMASK=2 (for setting the signal mask). sigset_addr The address of the signal set (sigset_t) to be applied oldsigset_addr The address of the old signal set (sigset_t)
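A minimal SystemTap script built on this probe point is shown below. It is an illustrative sketch based only on the values listed above; the output format is arbitrary:

probe signal.procmask {
  # Print who changed the mask, how, and the relevant addresses
  printf("%s: how=%d sigset=0x%x sigset_addr=%p oldsigset_addr=%p\n",
         name, how, sigset, sigset_addr, oldsigset_addr)
}

Running the script with stap prints one line each time a process examines or changes its set of blocked signals.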
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-procmask
Chapter 1. Overview of Insights for Red Hat Enterprise Linux advisor service assessment and monitoring
Chapter 1. Overview of Insights for Red Hat Enterprise Linux advisor service assessment and monitoring Use the advisor service to assess and monitor the health of your Red Hat Enterprise Linux (RHEL) infrastructure. Whether you are concerned with individual systems, groups of systems, or your whole infrastructure, be aware of the exposure of your systems to configuration issues that can affect availability, stability, performance, and security. After installing and registering the Insights for Red Hat Enterprise Linux client, the client runs daily to check systems against a database of Recommendations , which are sets of conditions that can leave your RHEL systems at risk. Your data is then uploaded to the Operations > Advisor > Recommendations page where you can perform the following actions: See all of the recommendations for your entire RHEL infrastructure. Use robust filtering capabilities to refine your results to those recommendations, systems, groups, or workloads that are of greatest concern to you, including SAP workloads, Satellite host collections, and custom tags. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual systems. Share results with other stakeholders. For more information, see Generating Advisor Service Reports with FedRAMP . Create and manage remediation playbooks to fix issues right from the Insights for Red Hat Enterprise Linux application. For more information, see Red Hat Insights Remediations Guide with FedRAMP . 1.1. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. All users on your account have access to most of the data in Insights for Red Hat Enterprise Linux. 1.1.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.1.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group, its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console, navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.1.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions.
The roles in this group usually include administrator in their name. On the Hybrid Cloud Console, navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. 1.1.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. Additional resources For more information about user access and permissions, see User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP . 1.1.3. User Access roles for advisor service users The following roles enable standard or enhanced access to remediations features in Insights for Red Hat Enterprise Linux: RHEL Advisor administrator. Perform any available operation against any Insights for Red Hat Enterprise Linux advisor-service resource. RHEL Advisor viewer. Read advisor data.
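The client-side registration that feeds the advisor service is typically a single command on each RHEL system. The following is a sketch of common usage; exact flags can vary between insights-client versions, so treat them as assumptions:

$ sudo insights-client --register
$ sudo insights-client --status

After registration, the client uploads data on its daily schedule and the system appears under Operations > Advisor > Recommendations.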
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service_with_fedramp/assembly-adv-assess-overview
Chapter 33. Monitoring Messaging Statistics
Chapter 33. Monitoring Messaging Statistics When statistics collection is enabled for a messaging server in the messaging-activemq subsystem, you can view runtime statistics for resources on the messaging server. 33.1. Enabling Messaging Statistics Because it can negatively impact performance, statistics collection for the messaging-activemq subsystem is not enabled by default. You do not need to enable queue statistics to obtain basic information, such as the number of messages on a queue or the number of messages added to a queue. Those statistics are available using queue attributes without requiring that you set statistics-enabled to true . You can enable additional statistics collection using the management CLI or the management console . Enable Messaging Statistics Using the Management CLI The following management CLI command enables the collection of statistics for the default messaging server. Pooled connection factory statistics are enabled separately from the other messaging server statistics. Use the following command to enable statistics for a pooled connection factory. Reload the server for the changes to take effect. Enable Messaging Statistics Using the Management Console Use the following steps to enable statistics collection for a messaging server using the management console. Navigate to Configuration Subsystems Messaging (ActiveMQ) Server . Select the server and click View . Click Edit under the Statistics tab. Set the Statistics Enabled field to ON and click Save . Pooled connection factory statistics are enabled separately from the other messaging server statistics. Use the following steps to enable statistics collection for a pooled connection factory. Navigate to Configuration Subsystems Messaging (ActiveMQ) Server . Select the server, select Connections , and click View . Select the Pooled Connection Factory tab. Select the pooled connection factory and click Edit under the Attributes tab. Set the Statistics Enabled field to ON and click Save . Reload the server for the changes to take effect. 33.2. Viewing Messaging Statistics You can view runtime statistics for a messaging server using the management CLI or management console . View Messaging Statistics Using the Management CLI You can view messaging statistics using the following management CLI commands. Be sure to include the include-runtime=true argument as statistics are runtime information. View statistics for a queue. View statistics for a topic. View statistics for a pooled connection factory. Note Pooled connection factory statistics are enabled separately from the other messaging server statistics. See Enabling Messaging Statistics for instructions. View Messaging Statistics Using the Management Console To view messaging statistics from the management console, navigate to the Messaging (ActiveMQ) subsystem from the Runtime tab, and select the server. Select a destination to view its statistics. Note The Prepared Transactions page is where you can view, commit, and roll back prepared transactions. See Managing Messaging Journal Prepared Transactions for more information. See Messaging Statistics for a detailed list of all available statistics. 33.3. Configuring Message Counters You can configure the following message counter attributes for a messaging server. message-counter-max-day-history : The number of days the message counter history is kept. message-counter-sample-period : How often, in milliseconds, the queue is sampled. 
The management CLI command to configure these options uses the following syntax. Be sure to replace STATISTICS_NAME and STATISTICS_VALUE with the statistic name and value you want to configure. For example, use the following commands to set the message-counter-max-day-history to five days and the message-counter-sample-period to two seconds. 33.4. Viewing the Message Counter and History for a Queue You can view the message counter and message counter history for a queue using the following management CLI operations. list-message-counter-as-json list-message-counter-as-html list-message-counter-history-as-json list-message-counter-history-as-html The management CLI command to display these values uses the following syntax. Be sure to replace QUEUE_NAME and OPERATION_NAME with the queue name and operation you want to use. For example, use the following command to view the message counter for the TestQueue queue in JSON format. 33.5. Reset the Message Counter for a Queue You can reset the message counter for a queue using the reset-message-counter management CLI operation. 33.6. Runtime Operations Using the Management Console Using the management console, you can: Perform forced failover to another messaging server Reset all message counters for a messaging server Reset all message counters history for a messaging server View information related to a messaging server Close connections for a messaging server Roll back transactions Commit transactions Performing Forced Failover to Another Messaging Server Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server Click the arrow button next to View and click Force Failover . On the Force Failover window, click Yes . Resetting All Message Counters for a Messaging Server Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server Click the arrow button next to View and click Reset . On the Reset window, click the toggle button to Reset all message counters to enable the functionality. The button now displays ON in a blue background. Click Reset . Resetting Message Counters History for a Messaging Server Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server Click the arrow button next to View and click Reset . On the Reset window, click the toggle button to Reset all message counters history to enable the functionality. The button now displays ON in a blue background. Click Reset . Viewing Information Related to a Messaging Server Using the management console, you can view a list of the following information related to a messaging server: Connections Consumers Producers Connectors Roles Transactions To view information related to a messaging server: Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server and then click View . Click the appropriate item on the navigation pane to view a list of those items on the right pane. Closing Connections for a Messaging Server You can close connections by providing an IP address, an ActiveMQ address match, or a user name.
To close connections for a messaging server: Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server and then click View . On the navigation pane, click Connections . On the Close window, click the appropriate tab based on which connection you want to close. Based on your selection, enter the IP address, ActiveMQ address match, or the user name, and then click Close . Rolling Back Transactions for a Messaging Server Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server and then click View . On the navigation pane, click Transactions . Select the transaction you want to roll back and click Rollback . Committing Transactions for a Messaging Server Access the management console and navigate to Server using either of the following: Runtime -> Browse By Hosts Host Server Runtime -> Browse By Server Groups Server Group Server Click Messaging ActiveMQ Server and then click View . On the navigation pane, click Transactions . Select the transaction you want to commit and click Commit .
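Combining the syntax from Section 33.4 with a concrete queue name gives invocations such as the following sketch, which reuses the TestQueue example queue from earlier; substitute your own queue name:

/subsystem=messaging-activemq/server=default/jms-queue=TestQueue:list-message-counter-history-as-json

Basic counters can also be read without enabling statistics, as noted in Section 33.1, by querying queue attributes directly. The attribute name below is an assumption and may vary by version:

/subsystem=messaging-activemq/server=default/jms-queue=TestQueue:read-attribute(name=message-count)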
[ "/subsystem=messaging-activemq/server=default:write-attribute(name=statistics-enabled,value=true)", "/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:write-attribute(name=statistics-enabled,value=true)", "/subsystem=messaging-activemq/server=default/jms-queue=DLQ:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"consumer-count\" => 0, \"dead-letter-address\" => \"jms.queue.DLQ\", \"delivering-count\" => 0, \"durable\" => true, } }", "/subsystem=messaging-activemq/server=default/jms-topic=testTopic:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"delivering-count\" => 0, \"durable-message-count\" => 0, \"durable-subscription-count\" => 0, } }", "/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra/statistics=pool:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"ActiveCount\" => 1, \"AvailableCount\" => 20, \"AverageBlockingTime\" => 0L, \"AverageCreationTime\" => 13L, \"AverageGetTime\" => 14L, } }", "/subsystem=messaging-activemq/server=default::write-attribute(name= STATISTICS_NAME ,value= STATISTICS_VALUE )", "/subsystem=messaging-activemq/server=default:write-attribute(name=message-counter-max-day-history,value=5) /subsystem=messaging-activemq/server=default:write-attribute(name=message-counter-sample-period,value=2000)", "/subsystem=messaging-activemq/server=default/jms-queue= QUEUE_NAME : OPERATION_NAME", "/subsystem=messaging-activemq/server=default/jms-queue=TestQueue:list-message-counter-as-json { \"outcome\" => \"success\", \"result\" => \"{\\\"destinationName\\\":\\\"TestQueue\\\",\\\"destinationSubscription\\\":null,\\\"destinationDurable\\\":true,\\\"count\\\":0,\\\"countDelta\\\":0,\\\"messageCount\\\":0,\\\"messageCountDelta\\\":0,\\\"lastAddTimestamp\\\":\\\"12/31/69 7:00:00 PM\\\",\\\"updateTimestamp\\\":\\\"2/20/18 2:24:05 PM\\\"}\" }", "/subsystem=messaging-activemq/server=default/jms-queue=TestQueue:reset-message-counter { \"outcome\" => \"success\", \"result\" => undefined }" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/messaging_statistics
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform Red Hat OpenShift Data Foundation 4.17 Instructions on deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). Important Deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . 
When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for use in the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with the minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation. Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploying OpenShift Data Foundation on OpenShift Container Platform in internal mode using dynamic storage devices provided by Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) enables you to create internal cluster resources.
This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later.
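The following consolidated sketch shows the kind of Vault and oc commands that the procedures in sections 2.2 and 2.3 rely on, assuming a backend path and policy both named odf and a service account named odf-vault-auth (all three names are illustrative); verify each command against your own Vault and cluster setup:

# Section 2.2: enable the KV backend path (use v1 or v2, to match your Vault)
$ vault secrets enable -path=odf kv
$ vault secrets enable -path=odf kv-v2

# Section 2.2: restrict write/delete on the secret, then create a matching token
$ echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
$ vault token create -policy=odf -format json

# Section 2.3: service account and token review permissions for Kubernetes authentication
$ oc -n openshift-storage create serviceaccount odf-vault-auth
$ oc create clusterrolebinding vault-tokenreview-binding \
    --clusterrole=system:auth-delegator \
    --serviceaccount=openshift-storage:odf-vault-auth

# Section 2.3: configure the Kubernetes authentication method in Vault
# (SA_JWT_TOKEN, SA_CA_CRT, and OCP_HOST hold the token, CA certificate,
# and cluster endpoint collected in the corresponding procedure steps)
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$OCP_HOST" \
    kubernetes_ca_cert="$SA_CA_CRT"

# Section 2.3: generate the role referenced later during storage system creation
$ vault write auth/kubernetes/role/odf-rook-ceph-op \
    bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
    bound_service_account_namespaces=openshift-storage \
    policies=odf \
    ttl=1440h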
Procedure Enable the Key/Value (KV) backend path in the vault, using the command that matches your vault KV secret engine API version (version 1 or version 2). Create a policy that restricts users from performing write or delete operations on the secret, and then create a token that matches that policy. The corresponding Vault CLI commands are sketched before this procedure. 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account, where <serviceaccount_name> specifies the name of the service account. Create the clusterrolebindings and clusterroles for that service account. Create a secret for the serviceaccount token and CA certificate, where <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer. Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault. Important When the issuer is empty, configure the Kubernetes authentication method in Vault without the issuer argument. Enable the Key/Value (KV) backend path in Vault, again using the command that matches your Vault KV secret engine API version (version 1 or version 2). Create a policy that restricts users from performing write or delete operations on the secret, and generate the roles. The oc and vault commands for these steps are sketched before the procedure in Section 2.2. The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Common security practice requires periodic rotation of encryption keys. You can enable key rotation when using KMS by using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The examples below use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace
$ oc annotate namespace <namespace> keyrotation.csiaddons.openshift.io/schedule=@weekly
Annotating StorageClass
$ oc annotate storageclass <storageclass-name> keyrotation.csiaddons.openshift.io/schedule=@weekly
Annotating PersistentVolumeClaims
$ oc annotate pvc <pvc-name> -n <namespace> keyrotation.csiaddons.openshift.io/schedule=@weekly
2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] .
This provides a high availability solution for Multicloud Object Gateway, where the PostgreSQL pod would otherwise be a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using an encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace .
Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Data Protection page, if you are configuring the Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify that OpenShift Data Foundation is successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.
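If you prefer the command line, the same check can be done with oc ; a minimal sketch, assuming the default openshift-storage namespace:

$ oc get pods -n openshift-storage

# Pods should report Running or Completed; anything else warrants a closer look:
$ oc get pods -n openshift-storage --field-selector=status.phase!=Running,status.phase!=Succeeded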
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, all applicative data residing on the Multicloud Object Gateway can be lost. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4.
Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io 2.6. Uninstalling OpenShift Data Foundation 2.6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . Chapter 3. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode Red Hat OpenShift Data Foundation can use an externally hosted Red Hat Ceph Storage (RHCS) cluster as the storage provider on Red Hat OpenStack Platform. See Planning your deployment for more information. For instructions on how to install an RHCS cluster, see the installation guide . Follow these steps to deploy OpenShift Data Foundation in external mode: Install the OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating an OpenShift Data Foundation Cluster for external mode You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on OpenShift Container Platform deployed on Red Hat OpenStack platform. Prerequisites Ensure the OpenShift Container Platform version is 4.17 or above before deploying OpenShift Data Foundation 4.17. The OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab. Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode . Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . Red Hat recommends that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with OpenShift Data Foundation deployment. Red Hat recommends using a separate pool for each OpenShift Data Foundation cluster. Procedure Click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation -> Create Instance link of Storage Cluster. Select Mode as External . By default, Internal is selected as the deployment mode. Figure 3.1. Connect to external cluster section on Create Storage Cluster form In the Connect to external cluster section, click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key . Run the following command on the RHCS node to view the list of available arguments (the script name shown is the one shipped via the Download Script link):
# python3 ceph-external-cluster-details-exporter.py --help
Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. Note You can also run the script from inside a MON container (containerized deployment) or from a MON node (rpm deployment). To retrieve the external cluster details from the RHCS cluster, run the following command:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd-block-pool-name> [optional arguments]
For example (placeholder values):
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
In the above example, --rbd-data-pool-name is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint is optional.
Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> --monitoring-endpoint is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user is an optional parameter used for providing a name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user are set as:
caps: [mgr] allow command config
caps: [mon] allow r, allow command quorum_status, allow command version
caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
Save the JSON output generated by the python script to a file with a .json extension. Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Click External cluster metadata -> Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Figure 3.2. JSON file content Click Create . The Create button is enabled only after you upload the .json file. Verification steps Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark. Click Operators -> Installed Operators -> Storage Cluster link to view the storage cluster installation status. Alternatively, when you are on the Operator Details tab, you can click on the Storage Cluster tab to view the status. To verify that the OpenShift Data Foundation pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation . 3.3. Verifying your OpenShift Data Foundation installation for external mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.3.1. Verifying the state of the pods Click Workloads -> Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in Running state: Table 3.1.
Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 3.3.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that both Storage Cluster and Data Resiliency have a green tick. In the Details card, verify that the cluster information is displayed as follows:
Service Name: OpenShift Data Foundation
Cluster Name: ocs-external-storagecluster
Provider: OpenStack
Mode: External
Version: ocs-operator-4.17.0
For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. For more information on the health of the OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 3.3.4. Verifying that the storage classes are created and listed Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation. 3.3.5. Verifying that Ceph cluster is connected Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster:
$ oc get cephcluster -n openshift-storage
3.3.6.
Verifying that storage cluster is ready Run the following command to verify that the storage cluster is ready and the External option is set to true:
$ oc get storagecluster -n openshift-storage
3.4. Uninstalling OpenShift Data Foundation 3.4.1. Uninstalling OpenShift Data Foundation from external storage system Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The table below provides information on the different values that can be used with these annotations: Table 3.2. uninstall.ocs.openshift.io uninstall annotations descriptions
cleanup-policy = delete (default): Rook cleans up the physical drives and the DataDirHostPath.
cleanup-policy = retain: Rook does not clean up the physical drives and the DataDirHostPath.
mode = graceful (default): Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator or user.
mode = forced: Rook and NooBaa proceed with the uninstall even if PVCs or OBCs provisioned using Rook and NooBaa exist.
You can change the uninstall mode by editing the value of the annotation, for example:
$ oc -n openshift-storage annotate storagecluster ocs-external-storagecluster uninstall.ocs.openshift.io/mode="forced" --overwrite
Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces:
$ oc get volumesnapshot --all-namespaces
From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation:
$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>
Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you may set the uninstall mode annotation to "forced" and skip this step. Doing so will result in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation.
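A minimal sketch of such a script, assuming the standard OpenShift Data Foundation provisioner strings and internal NooBaa PVC names (verify both against your deployment):

#!/bin/bash
# Storage provisioners used by OpenShift Data Foundation storage classes
RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"

# PVCs used internally by OpenShift Data Foundation; these are ignored
NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"

# Find the storage classes backed by the ODF provisioners
SCS=$(oc get storageclass --no-headers 2>/dev/null | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}')

# List the PVCs and OBCs in each of those storage classes
for SC in $SCS; do
    echo "====== PVCs and OBCs in StorageClass $SC ======"
    oc get pvc --all-namespaces --no-headers 2>/dev/null | grep "$SC" | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
    oc get obc --all-namespaces --no-headers 2>/dev/null | grep "$SC"
done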
Delete the OBCs:
$ oc delete obc <obc-name> -n <project-name>
Delete the PVCs:
$ oc delete pvc <pvc-name> -n <project-name>
Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources:
$ oc delete -n openshift-storage storagesystem --all --wait=true
Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example:
$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m
The project is deleted if the following command returns a NotFound error:
$ oc get project openshift-storage
Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm all PVs provisioned using OpenShift Data Foundation are deleted with oc get pv . If there is any PV left in the Released state, delete it with oc delete pv <pv-name> . Remove the CustomResourceDefinitions . The exact list of related CRDs varies by version; you can list them with oc get crd | grep -e ceph.rook.io -e noobaa.io -e ocs.openshift.io and remove them with oc delete crd <crd-names> --wait=true --timeout=5m . To ensure that OpenShift Data Foundation is uninstalled completely: In the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 3.4.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace:
$ oc get pod,pvc -n openshift-monitoring
Edit the monitoring configmap :
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Remove any config sections that reference the OpenShift Data Foundation storage classes (for example, the volumeClaimTemplate entries under the alertmanagerMain and prometheusK8s monitoring components) and save it. List the pods consuming the PVC:
$ oc get pod -n openshift-monitoring
The alertmanagerMain and prometheusK8s pods that were consuming the PVCs move to the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Data Foundation PVC. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes:
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
3.4.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see image registry . The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section:
$ oc edit configs.imageregistry.operator.openshift.io
In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC:
$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
3.4.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace.
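A minimal sketch of the removal command, assuming the ClusterLogging instance uses the default name instance :

$ oc delete clusterlogging instance -n openshift-logging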
The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs:
$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m
where <pvc-name> is the name of the PVC. Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component in internal mode, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Note Deploying a standalone Multicloud Object Gateway component is not supported in external mode deployments. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed.
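Optional: while the wizard in the following procedure runs, you can watch the standalone Multicloud Object Gateway resources come up from the CLI. A minimal sketch, assuming the default openshift-storage namespace and that the NooBaa CRD has been installed by the operator:

$ oc get noobaa,pods -n openshift-storage -w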
Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state.
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the Storage cluster. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 6. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 6.1. Creating storage classes and pools You can create a storage class using an existing pool, or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in the Ready state.
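For reference, the wizard in the following procedure is roughly equivalent to creating a CephBlockPool and a StorageClass declaratively. A minimal sketch with hypothetical names my-custom-pool and my-custom-sc ; the provisioner and secret names shown are the standard internal-mode values, but verify them against your cluster:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: my-custom-pool
  namespace: openshift-storage
spec:
  replicated:
    size: 2                      # 2-way replication; use 3 for 3-way
  parameters:
    compression_mode: aggressive # optional; omit to disable compression
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-custom-sc
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: my-custom-pool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true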
Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see the Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 6.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, ensure that you configure access to KMS for one of the following: Using vaulttokens : Ensure that you configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure that you configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure that you configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in the Microsoft product documentation. Create a Service Principal with certificate-based authentication. For more information, see Create an Azure service principal with Azure CLI in the Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list.
The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Providers and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in the Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter the Azure Vault URL . Enter the Client ID . Enter the Tenant ID . Upload the Certificate file in .PEM format; the certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab. Capture the encryptionKMSID being used by the storage class, for example, encryptionKMSID: 1-vault . On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2; an example is sketched after this procedure. Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims .
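A minimal sketch of the resulting csi-kms-connection-details entry, assuming the identified encryptionKMSID is 1-vault and the backend uses KV secret engine API version 2 (the JSON keys shown are illustrative of a vaulttokens connection; verify them against your ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: csi-kms-connection-details
  namespace: openshift-storage
data:
  1-vault: |-
    {
      "encryptionKMSType": "vaulttokens",
      "kmsServiceName": "1-vault",
      "vaultAddress": "https://vault.example.com:8200",
      "vaultBackendPath": "odf",
      "vaultBackend": "kv-v2"
    }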
Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . Chapter 7. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data subsection of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 7.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example:
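A minimal sketch of that stanza, assuming the Persistent Volume Claim from the earlier step is named ocs4registry :

```yaml
spec:
  storage:
    pvc:
      claim: ocs4registry
```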
Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 7.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Container Platform provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map
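A sketch of that Config Map, consistent with the PVC names the verification steps below expect ( ocs-prometheus-claim and ocs-alertmanager-claim ); the values in angle brackets are for you to fill in:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: <time to retain monitoring data, for example 24h>
      volumeClaimTemplate:
        metadata:
          name: ocs-prometheus-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, for example 40Gi>
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: ocs-alertmanager-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, for example 40Gi>
```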
Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 7.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 7.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 7.3. Persistent Volume Claims attached to prometheus-k8s-* pod 7.3. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster relies solely on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Data Foundation. Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 7.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example, you can specify that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. A copy of the shard is replicated across all the nodes and is always available, and the copy can be recovered as long as at least two nodes exist, due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note If you omit the storage block, the deployment is backed by default storage. For more information, see Configuring cluster logging . 7.3.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd :
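A sketch of a ClusterLogging instance with the storage block filled in; the storage class name and 200G size match the text above, while the node counts, redundancy policy, and collector settings are illustrative assumptions:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```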
If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 7.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workloads -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter Curator time to avoid a PV-full scenario on the PVs attached to the Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the index data retention to the default of 5 days. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page.
In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of the Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra, or to have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node only schedules OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as the openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services:
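A sketch of the relevant node fields; the taint key and the node-role.kubernetes.io/infra label come from the text above, while the cluster.ocs.openshift.io/openshift-storage label is the OpenShift Data Foundation label referenced later in this guide:

```yaml
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
    cluster.ocs.openshift.io/openshift-storage: ""
spec:
  taints:
  - key: node.ocs.openshift.io/storage
    value: "true"
    effect: NoSchedule
```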
9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. A Machine Set template that sets the appropriate taint and labels creates nodes that can be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, apply the node-role.kubernetes.io/infra="" label to these nodes. Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node only schedules OpenShift Data Foundation resources and repels any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" . The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role node-role.kubernetes.io/infra="" and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute -> Nodes , and then select the node which has to be tainted. In the Details page click on Edit taints . Enter the values in the Key ( node.ocs.openshift.io/storage ), Value ( true ), and Effect ( NoSchedule ) fields. Click Save. Verification steps Follow the steps to verify that the node has been tainted successfully: Navigate to Compute -> Nodes . Select the node to verify its status, and then click on the YAML tab. In the spec section, check the values of the following parameters:
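The expected values, mirroring the taint entered in the previous step:

```yaml
spec:
  taints:
  - key: node.ocs.openshift.io/storage
    value: "true"
    effect: NoSchedule
```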
Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . Chapter 10. Scaling storage nodes To scale the storage capacity of OpenShift Data Foundation, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing OpenShift Data Foundation worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity 10.1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Storage device requirements Dynamic storage devices Capacity planning Warning Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support. 10.2. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on Red Hat OpenStack Platform infrastructure To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class that you wish to use to provision new storage devices. The storage class should be set to standard if you are using the default storage class generated during deployment. If you have created other storage classes, select whichever is appropriate. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts.
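A sketch of the debug session, using standard oc tooling; lsblk is one way to inspect the devices before checking for the crypt keyword in the next step:

```
$ oc debug node/<node-name>
sh-4.4# chroot /host
sh-4.4# lsblk
```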
<node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 10.3. Scaling out storage capacity by adding new nodes To scale out storage capacity, you need to perform the following: Add a new node to increase the storage capacity when the existing worker nodes are already running at their maximum supported OSDs, that is, in increments of 3 OSDs of the capacity selected during initial configuration. Verify that the new node is added successfully Scale up the storage capacity after the node is added 10.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute -> Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 10.3.2. Scaling up storage capacity After you add a new node to OpenShift Data Foundation, you must scale up the storage capacity as described in Scaling up storage by adding capacity . Chapter 11. Multicloud Object Gateway 11.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 11.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses the AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Data Foundation Platform. 11.3. Adding storage resources for hybrid or Multicloud 11.3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view, which lets you fill in the required secrets.
For more information on creating an OCP secret, see the section Creating the secret in the OpenShift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see Section 11.3.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a storage container that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab to view all the backing stores. 11.3.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters. You must add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 11.3.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 11.3.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 11.3.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 11.3.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 11.3.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 11.3.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 11.3.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the previous step. Apply the following YAML for a specific backing store:
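A sketch of the Secret and BackingStore pair described above; the field layout follows the noobaa.io/v1alpha1 CRD as an assumption, and the placeholders match the descriptions that follow:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
---
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: <backingstore_name>
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: <bucket-name>
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
```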
<bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the previous step. 11.3.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> An existing IBM COS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to the MCG which endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 11.3.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An Azure account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store:
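A sketch of the Azure Secret and BackingStore; the AccountName / AccountKey secret keys and the azureBlob field layout are assumptions about the CRD, while the placeholders match the descriptions in the surrounding text:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64>
  AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>
---
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: <backingstore_name>
  namespace: openshift-storage
spec:
  type: azure-blob
  azureBlob:
    targetBlobContainer: <blob-container-name>
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
```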
<blob-container-name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 11.3.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 11.3.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 11.3.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, the OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name:
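A sketch of that lookup, using only standard oc usage; the secret name placeholder is the RGW user secret mentioned above:

```
$ oc get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage
```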
Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the previous step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the previous step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 11.3.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage Systems tab, select the storage system and then click Overview -> Object tab. Select the Multicloud Object Gateway link. Select the Resources tab on the left. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in the NooBaa UI cannot be used by the OpenShift UI or the MCG CLI. 11.3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Claim (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click Next . In Placement Policy , select Tier 1 - Policy Type and click Next . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click Next . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in the previous step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab and search for the new Bucket Class.
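For reference, a sketch of the BucketClass resource that the wizard above produces, assuming a single Spread tier over two backing stores (all names are placeholders):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <bucket-class-name>
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - placement: Spread
      backingStores:
      - <backing-store-1>
      - <backing-store-2>
```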
11.3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the OpenShift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file; make the required changes in this file and click Save . 11.3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save . 11.4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 11.4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli , to verify that all the operations can be performed on the target bucket. Also, listing the buckets of this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PutObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 11.4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets .
Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 11.4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:
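A sketch of the IBM COS NamespaceStore, using the placeholders described below; the s3Compatible block and the signatureVersion value are assumptions about the CRD layout:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  type: ibm-cos
  s3Compatible:
    endpoint: <IBM COS ENDPOINT>
    targetBucket: <target-bucket>
    signatureVersion: v2
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
```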
<IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket.
<read-resources> A comma-separated list of namespace-stores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" . Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that OpenShift Container Platform with the OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage -> Object Storage -> Namespace Store tab. Click Create namespace store to create a namespacestore resource to be used in the namespace bucket. Enter a namespacestore name.
Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created the desired number of resources. Navigate to the Bucket Class tab and click Create Bucket Class . Choose the Namespace BucketClass type radio button. Enter a BucketClass name and click Next . Choose a Namespace Policy Type for your namespace bucket, and then click Next . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click Next . Review your new bucket class details, and then click Create Bucket Class . Navigate to the Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to the Object Bucket Claims tab and click Create Object Bucket Claim . Enter an ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with an Object Bucket Claim for your namespace. Navigate to the Object Bucket Claims tab and verify that the Object Bucket Claim created is in the Bound state. Navigate to the Object Buckets tab and verify that your namespace bucket is present in the list and is in the Bound state. 11.5. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Section 11.3, "Adding storage resources for hybrid or Multicloud" . You can set up mirroring data by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 11.5.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 11.5.2, "Creating bucket classes to mirror data using a YAML" 11.5.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you have downloaded the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 11.5.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS:
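A sketch of the two pieces referenced here: a hybrid mirror BucketClass, followed by the extra Object Bucket Claim (OBC) lines used in the next step. The mirror-to-aws name and the two backing store names are illustrative assumptions:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: mirror-to-aws
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - placement: Mirror
      backingStores:
      - <local-ceph-backingstore>
      - <aws-backingstore>
---
# Extra lines for a standard OBC that point the claim at the
# mirroring bucket class defined above.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
spec:
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: mirror-to-aws
```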
11.6. Bucket policies in the Multicloud Object Gateway

OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them.

11.6.1. Introduction to bucket policies

Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview.

11.6.2. Using bucket policies in Multicloud Object Gateway

Prerequisites
A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications". A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account.

Procedure
To use bucket policies in the MCG: Create the bucket policy in JSON format (see the example after this section). Replace [email protected] with a valid Multicloud Object Gateway user account. Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket. Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy.

Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io.

Note Bucket policy conditions are not supported.

Additional resources
There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview. For more examples of bucket policies, see AWS Bucket Policy Examples. OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal, NotAction, and NotResource. For more information on these elements, see IAM JSON policy elements reference.

11.6.3. Creating a user in the Multicloud Object Gateway

Prerequisites
A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure
Execute the account create command to create an MCG user account (see the sketch after this section). <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets.
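The following sketch illustrates the bucket policy workflow above. The policy content, bucket name, endpoint, and account names are illustrative placeholders, not values from this document:

# Write an example bucket policy that lets an MCG user account read objects:
cat > BucketPolicy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRead",
      "Effect": "Allow",
      "Principal": ["<MCG user account>"],
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::MyBucket/*"]
    }
  ]
}
EOF

# Apply the policy with the AWS S3 client (add --no-verify-ssl for self-signed certificates):
aws --endpoint <ENDPOINT> --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy.json

# Create an MCG user account that can be used as a policy principal:
noobaa account create <noobaa-account-name> --allow_bucket_create=true --allowed_buckets=<bucket-name> --default_resource=<resource>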
11.7. Object Bucket Claim

An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 11.7.1, "Dynamic Object Bucket Claim" Section 11.7.2, "Creating an Object Bucket Claim using the command line interface" Section 11.7.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and cannot create new buckets by default.

11.7.1. Dynamic Object Bucket Claim

Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application.

Note The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information.

Procedure
Add the OBC lines to your application YAML (see the sketch after this section). These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC, add more lines to the YAML file, as in the Job shown in the sketch. The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file. Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run oc get cm with the name of your OBC (shown in the sketch). You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST. For example, if the BUCKET_HOST is https://my.example.com, and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443. BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials.

Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim.
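A minimal sketch of the dynamic OBC flow described above. The storage class name openshift-storage.noobaa.io is the MCG default; the OBC name, bucket name, and application image are placeholders, and the Job uses envFrom as one way to map the generated ConfigMap and Secret into environment variables:

# The OBC itself:
cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
spec:
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io
EOF

# A Job that consumes the generated ConfigMap and Secret:
cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: test
        image: <your application image>
        envFrom:
        - configMapRef:
            name: <obc-name>   # provides BUCKET_HOST, BUCKET_PORT, BUCKET_NAME
        - secretRef:
            name: <obc-name>   # provides AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
EOF

# Apply your own YAML file and inspect the generated ConfigMap and Secret:
oc apply -f <yaml.file>
oc get cm <obc-name> -o yaml
oc get secret <obc_name> -o yaml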
11.7.2. Creating an Object Bucket Claim using the command line interface

When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service.

Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure
Use the command-line interface to generate the details of a new bucket and credentials (the command sequence is sketched after this section). Replace <obc-name> with a unique OBC name, for example, myappobc. Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace. The MCG command-line interface creates the necessary configuration and informs OpenShift about the new OBC. Run oc get obc to view the OBC, and oc get obc with the -o yaml option to view the YAML file for the new OBC. Inside your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The configuration map and the secret have the same name as the OBC. View the secret; the secret gives you the S3 access credentials. View the configuration map; the configuration map contains the S3 endpoint information for your application.
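A sketch of the command sequence above, assuming the default openshift-storage namespace and a placeholder OBC name:

noobaa obc create <obc-name> -n openshift-storage
oc get obc -n openshift-storage
oc get obc <obc-name> -o yaml -n openshift-storage
oc get -n openshift-storage secret <obc-name> -o yaml   # S3 access credentials (Base64 encoded)
oc get -n openshift-storage cm <obc-name> -o yaml       # S3 endpoint information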
11.7.3. Creating an Object Bucket Claim using the OpenShift Web Console

You can create an Object Bucket Claim (OBC) using the OpenShift Web Console.

Prerequisites
Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 11.7.1, "Dynamic Object Bucket Claim".

Procedure
Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims -> Create Object Bucket Claim. Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create. Once you create the OBC, you are redirected to its detail page.

11.7.4. Attaching an Object Bucket Claim to a deployment

Once created, Object Bucket Claims (OBCs) can be attached to specific deployments.

Prerequisites
Administrative access to the OpenShift Web Console.

Procedure
On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims. Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment. Select the desired deployment from the Deployment Name list, then click Attach.

11.7.5. Viewing object buckets using the OpenShift Web Console

You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console.

Prerequisites
Administrative access to the OpenShift Web Console.

Procedure
Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Buckets. Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page.

11.7.6. Deleting Object Bucket Claims

Prerequisites
Administrative access to the OpenShift Web Console.

Procedure
On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims. Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim. Click Delete.

11.8. Caching policy for object buckets

A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket.

Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope. AWS S3 IBM COS

11.8.1. Creating an AWS cache bucket

Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure
Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the namespacestore create command (see the sketch after this section). Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials. You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>. Replace <namespacestore-secret-name> with a unique name. Then apply the NamespaceStore YAML. Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the bucketclass create command to create a bucket class. Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the obc create command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in the previous step.
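A sketch of the AWS cache bucket flow above, combining the CLI path and the YAML path; all names are placeholders:

# CLI path: create the namespacestore, the cache bucket class, and the OBC:
noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>
noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>
noobaa obc create <my-bucket-claim> --bucketclass <custom-bucket-class>

# YAML path: create the credentials secret, then the NamespaceStore:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
EOF
cat <<EOF | oc apply -f -
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  labels:
    app: noobaa
  name: <namespacestore>
  namespace: openshift-storage
spec:
  awsS3:
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
    targetBucket: <target-bucket>
  type: aws-s3
EOF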
11.8.2. Creating an IBM COS cache bucket

Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure
Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the namespacestore create command (see the sketch after this section). Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials. You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>. Replace <namespacestore-secret-name> with a unique name. Then apply the NamespaceStore YAML. Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the bucketclass create command to create a bucket class. Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the obc create command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in the previous step. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in the previous step.
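A sketch of the equivalent CLI sequence for IBM COS. The YAML path follows the same secret-plus-NamespaceStore pattern shown in the AWS example above; all names here are placeholders:

noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>
noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>
noobaa obc create <my-bucket-claim> --bucketclass <custom-bucket-class>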
11.9. Scaling Multicloud Object Gateway performance by adding endpoints

The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service

11.9.1. Scaling the Multicloud Object Gateway with storage nodes

Prerequisites
A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods.

Procedure
Log in to OpenShift Web Console. From the MCG user interface, click Overview -> Add Storage Resources. In the window, click Deploy Kubernetes Pool. In the Create Pool step, create the target pool for the nodes to be installed. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources -> Storage resources -> Resource name.

11.10. Automatic scaling of MultiCloud Object Gateway endpoints

The number of MultiCloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the oc patch command sketched after this section. The example sets the minCount to 3 and the maxCount to 10.
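A sketch of the HPA patch described above, assuming the default storage cluster name ocs-storagecluster:

oc patch -n openshift-storage storagecluster ocs-storagecluster \
  --type merge \
  --patch '{"spec": {"multiCloudGateway": {"endpoints": {"minCount": 3, "maxCount": 10}}}}'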
Chapter 12. Managing persistent volume claims

Important Expanding PVCs is not supported for PVCs backed by OpenShift Data Foundation.

12.1. Configuring application pods to use OpenShift Data Foundation

Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod.

Prerequisites
Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes.

Procedure
Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims. Set the Project for the application pod. Click Create Persistent Volume Claim. Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name, for example, myclaim. Select the required Access Mode. Note The Access Mode, Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce (RWO), select the required Volume mode. The default volume mode is Filesystem. Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods. Create a new application pod. Under the spec: section, add a volumes: section to add the new PVC as a volume for the application pod (see the example after this procedure). For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs. Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config. Under the spec: section, add a volumes: section to add the new PVC as a volume for the application pod and click Save. For example:
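A minimal sketch of the volumes: addition for both cases, using the PVC name myclaim from the procedure; the pod, container, and mount path names are illustrative. For an existing deployment config, the same volumes: and volumeMounts: stanzas go into the pod template:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: <your application image>
    volumeMounts:
    - mountPath: /var/www/html   # where the PVC-backed volume is mounted
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim         # the PVC created above
EOF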
Verify that the new configuration is being used. Click Workloads -> Pods. Set the Project for the application pod. Verify that the application pod appears with a status of Running. Click the application pod name to view pod details. Scroll down to the Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim.

12.2. Viewing Persistent Volume Claim request status

Use this procedure to view the status of a PVC request.

Prerequisites
Administrator access to OpenShift Data Foundation.

Procedure
Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims. Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list. Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details.

12.3. Reviewing Persistent Volume Claim request events

Use this procedure to review and address Persistent Volume Claim (PVC) request events.

Prerequisites
Administrator access to OpenShift Web Console.

Procedure
In the OpenShift Web Console, click Storage -> Data Foundation. In the Storage systems tab, select the storage system and then click Overview -> Block and File. Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims. Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events. Address the events as required or as directed.

12.4. Dynamic provisioning

12.4.1. About dynamic provisioning

The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types.

12.4.2. Dynamic provisioning in OpenShift Data Foundation

Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block. Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem. Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem. Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS, NFS, and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file.

12.4.3. Available dynamic provisioning plug-ins

OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources:

Storage type | Provisioner plug-in name | Notes
OpenStack Cinder | kubernetes.io/cinder |
AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs | For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.
AWS Elastic File System (EFS) | (none) | Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.
Azure Disk | kubernetes.io/azure-disk |
Azure File | kubernetes.io/azure-file | The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD) | kubernetes.io/gce-pd | In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.
VMware vSphere | kubernetes.io/vsphere-volume |
Red Hat Virtualization | csi.ovirt.org |

Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.

Chapter 13. Volume Snapshots

A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots.

13.1. Creating volume snapshots
You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page.

Prerequisites
For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure that you stop all I/O before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it.

Procedure
From the Persistent Volume Claims page: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot. Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot. Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create. You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page: Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot. Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create. You will be redirected to the Details page of the volume snapshot that is created.

Verification steps
Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state.

13.2. Restoring volume snapshots

When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page.

Procedure
From the Persistent Volume Claims page: You can restore a volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC. Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode. Click Restore. You are redirected to the new PVC details page.
From the Volume Snapshots page: Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC. Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode. Click Restore. You are redirected to the new PVC details page.

Verification steps
Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state.

13.3. Deleting volume snapshots

Prerequisites
For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present.

Procedure
From the Persistent Volume Claims page: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot. From the Volume Snapshots page: Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot.

Verification steps
Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed.

Chapter 14. Volume cloning

A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD).

14.1. Creating a clone

Prerequisites
Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused).

Procedure
Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC. Click on the PVC that you want to clone and click Actions -> Clone PVC. Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system.
If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone. You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound. The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.

Chapter 15. Replacing storage nodes

You can choose one of the following procedures to replace storage nodes: Section 15.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 15.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure"

15.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure

Procedure
Log in to the OpenShift Web Console, and click Compute -> Nodes. Identify the node that you need to replace. Take a note of its Machine Name. Mark the node as unschedulable, and then drain it (both commands are sketched after this section). <node_name> Specify the name of the node that you need to replace. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute -> Machines. Search for the required machine. Beside the required machine, click Action menu (...) -> Delete Machine. Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute -> Nodes. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) -> Edit Labels. Add cluster.ocs.openshift.io/openshift-storage, and click Save. From the command-line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Specify the name of the new node.

Verification steps
Verify that the new node is present in the output of the node listing. Click Workloads -> Pods. Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts. Display the list of available block devices and check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support.
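A sketch of the command-line steps referenced in this procedure (and in the failed-node procedure that follows); the node names are placeholders:

# Mark the node unschedulable, then drain it:
oc adm cordon <node_name>
oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets

# Label the replacement node for OpenShift Data Foundation:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

# Verify the label and the OSD pods on the new node:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
oc get pods -o wide -n openshift-storage | grep <new_node_name> | grep rook-ceph-osd

# Check that the new OSD devices are encrypted (for cluster-wide encryption):
oc debug node/<node_name>
chroot /host
lsblk   # look for "crypt" beside the ocs-deviceset names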
15.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure

Procedure
Log in to the OpenShift Web Console, and click Compute -> Nodes. Identify the faulty node, and click on its Machine Name. Click Actions -> Edit Annotations, and click Add More. Add machine.openshift.io/exclude-node-draining, and click Save. Click Actions -> Delete Machine, and click Delete. A new machine is automatically created; wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute -> Nodes. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) -> Edit Labels. Add cluster.ocs.openshift.io/openshift-storage, and click Save. From the command-line interface Apply the OpenShift Data Foundation label to the new node (see the sketch in the previous section). <new_node_name> Specify the name of the new node. Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from the Red Hat OpenStack Platform console.

Verification steps
Verify that the new node is present in the output of the node listing. Click Workloads -> Pods. Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts. Display the list of available block devices and check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support.

Chapter 16. Replacing storage devices

16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure

Use this procedure to replace a storage device in OpenShift Data Foundation which is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). The full command flow is sketched at the end of this chapter.

Procedure
Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running. Scale down the OSD deployment for the OSD to be replaced, where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0. Verify that the rook-ceph-osd pod is terminated. Note If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. In case the persistent volume associated with the failed OSD fails, get the failed persistent volume details and delete it. Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma-separated OSD IDs in the command to remove more than one OSD.
(For example, FAILED_OSD_IDS=0,1,2.) The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod. For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find the relevant device name based on the PVC names identified in the previous step. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the ocs-osd-removal job.

Verification steps
Verify that there is a new OSD running. Verify that there is a new PVC created which is in Bound state. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in OpenShift Container Platform storage dashboard after device replacement
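A condensed sketch of the command flow for the device replacement procedure above, assuming the ocs-osd-removal template name used by current OpenShift Data Foundation releases; the OSD ID 0 is illustrative:

# Identify the OSD to replace and the node it is scheduled on:
oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide

# Scale down the OSD deployment and confirm that the pod is terminated:
osd_id_to_remove=0
oc scale -n openshift-storage deployment rook-ceph-osd-${osd_id_to_remove} --replicas=0
oc get -n openshift-storage pods -l ceph-osd-id=${osd_id_to_remove}

# Remove the old OSD (delete stale jobs first), then watch the removal job:
oc delete -n openshift-storage job ocs-osd-removal-job
oc project openshift-storage
oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} -p FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'

# Verify the replacement OSD and its PVC:
oc get -n openshift-storage pods -l app=rook-ceph-osd
oc get -n openshift-storage pvc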
Chapter 17. Upgrading to OpenShift Data Foundation

17.1. Overview of the OpenShift Data Foundation update process

This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager. For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> -> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> -> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> -> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused.

Important When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage versions in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker. Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph Storage version corresponding to the version in use.

You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates, see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases, see Updating Red Hat OpenShift Data Foundation 4.16 to 4.17. For updating between z-stream releases, see Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y. For updating external mode deployments, you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret. If you use local storage, then update the Local Storage operator. See Checking for Local Storage Operator deployments if you are unsure.

Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances.

Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode. The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway.
Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.

17.2. Updating Red Hat OpenShift Data Foundation 4.16 to 4.17

This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. As there is no dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by the RHCS upgrade or vice-versa. For more information about RHCS releases, see the knowledgebase solution. Important Upgrading to 4.17 directly from any version older than 4.16 is not supported.

Prerequisites
Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters. Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service, and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster.

Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS): Add another entry in the trust policy for the noobaa-core account as follows: Log into the AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles. Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication by using the OpenShift CLI. Search for the role name that you obtained from the previous step in the tool and click on the role name. Under the role summary, click Trust relationships. In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint. Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core.
Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place.

Procedure
On the OpenShift Web Console, navigate to Operators -> Installed Operators. Select the openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel. Select the stable-4.17 update channel and Save it. If the Upgrade status shows requires approval, click on requires approval. On the Install Plan Details page, click Preview Install Plan. Review the install plan and click Approve. Wait for the Status to change from Unknown to Created. Navigate to Operators -> Installed Operators. Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date. After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert.

Verification steps
Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service, and data resiliency are healthy. If the verification steps fail, contact Red Hat Support. Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret. Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide.

17.3. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y

This chapter helps you to upgrade between the z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately.
Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by the RHCS upgrade or vice-versa. See the knowledgebase solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. If the update strategy is set to Manual, then use the following procedure.

Prerequisites
Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters. Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service, and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster.

Procedure
On the OpenShift Web Console, navigate to Operators -> Installed Operators. Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval, click the requires approval link. On the InstallPlan Details page, click Preview Install Plan. Review the install plan and click Approve. Wait for the Status to change from Unknown to Created. After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.

Verification steps
Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service, and data resiliency are healthy. If the verification steps fail, contact Red Hat Support.

17.4. Changing the update approval strategy

To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic. Changing the update approval strategy to Manual requires manual approval for each upgrade.
Procedure Navigate to Operators -> Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. A command-line alternative is sketched below.
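The console steps in this chapter can also be approximated from the command line. The following is a minimal sketch, assuming the default openshift-storage namespace and a subscription named odf-operator; the subscription name is an assumption, so confirm it first with the list command:

oc get subscription -n openshift-storage

oc patch subscriptions.operators.coreos.com/odf-operator -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'

oc get csv -n openshift-storage

The first command lists the subscriptions so that you can confirm the name and the current update channel. The patch command switches the update approval strategy to Manual (use Automatic to revert). The last command shows the installed ClusterServiceVersions; the version and the Succeeded phase correspond to the Version and operator status checked in the verification steps above.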
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" 
--overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s", "oc annotate namespace openshift-storage openshift.io/node-selector=", "python3 ceph-external-cluster-details-exporter.py --help", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"client.healthchecker\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"ceph-rbd\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}]", "oc get cephcluster -n openshift-storage", "NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH ocs-external-storagecluster-cephcluster 31m15s Connected Cluster connected successfully HEALTH_OK", "oc get storagecluster -n openshift-storage", "NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 31m15s Ready true 2021-02-29T20:43:04Z 4.17.0", "oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo 
\"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc delete obc <obc name> -n <project name>", "oc delete pvc <pvc name> -n <project-name>", "oc delete -n openshift-storage storagesystem --all --wait=true", "oc project default oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc get pv oc delete pv <pv name>", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . 
apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: emptyDir: {} . . 
.", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m", "oc annotate namespace openshift-storage openshift.io/node-selector=", "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }", "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] 
collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5", "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3", "noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos", "noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage", 
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob", "noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage", "noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"", "noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage", "get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"", "apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - 
noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create 
mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws", "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io", "apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY", "oc apply -f <yaml.file>", "oc get cm <obc-name> -o yaml", "oc get secret <obc_name> -o yaml", "noobaa obc create <obc-name> -n openshift-storage", "INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"", "oc get obc -n openshift-storage", "NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s", "oc get obc test21obc -o yaml -n openshift-storage", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound", "oc get -n openshift-storage secret test21obc -o yaml", "apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: 
/api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque", "oc get -n openshift-storage cm test21obc -o yaml", "apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: mypd persistentVolumeClaim: claimName: myclaim", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: mypd persistentVolumeClaim: claimName: myclaim", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep 
cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found.", "oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc get pv oc delete pv <failed-pv-name>", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/<node name> chroot /host", "sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc get -n openshift-storage pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName 
pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/<node name> chroot /host", "lsblk", "oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/index
Chapter 5. Networking
Chapter 5. Networking This chapter covers network optimization topics for virtualized environments. 5.1. Networking Tuning Tips Use multiple networks to avoid congestion on a single network. For example, have dedicated networks for management, backups, or live migration. Red Hat recommends not using multiple interfaces in the same network segment. However, if this is unavoidable, you can use arp_filter to prevent ARP Flux, an undesirable condition that can occur in both hosts and guests and is caused by the machine responding to ARP requests from more than one network interface: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter or edit /etc/sysctl.conf to make this setting persistent. Note For more information on ARP Flux, see http://linux-ip.net/html/ether-arp.html#ether-arp-flux
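The echo command above changes arp_filter only for the running system. A minimal sketch of making the setting persistent, using the sysctl key that corresponds to the /proc path shown above:

echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf
sysctl -p

The sysctl -p command reloads /etc/sysctl.conf , so the setting takes effect immediately as well as after every reboot.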
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-networking
Chapter 43. Exchange Interface
Chapter 43. Exchange Interface Abstract This chapter describes the Exchange interface. Since the refactoring of the camel-core module performed in Apache Camel 2.0, there is no longer any need to define custom exchange types. The DefaultExchange implementation can now be used in all cases. 43.1. The Exchange Interface Overview An instance of the org.apache.camel.Exchange type encapsulates the current message passing through a route, with additional metadata encoded as exchange properties. Figure 43.1, "Exchange Inheritance Hierarchy" shows the inheritance hierarchy for the exchange type. The default implementation, DefaultExchange , is always used. Figure 43.1. Exchange Inheritance Hierarchy The Exchange interface Example 43.1, "Exchange Interface" shows the definition of the org.apache.camel.Exchange interface. Example 43.1. Exchange Interface Exchange methods The Exchange interface defines the following methods: getPattern() , setPattern() - The exchange pattern can be one of the values enumerated in org.apache.camel.ExchangePattern . The following exchange pattern values are supported: InOnly RobustInOnly InOut InOptionalOut OutOnly RobustOutOnly OutIn OutOptionalIn setProperty() , getProperty() , getProperties() , removeProperty() , hasProperties() - Use the property setter and getter methods to associate named properties with the exchange instance. The properties consist of miscellaneous metadata that you might need for your component implementation. setIn() , getIn() - Setter and getter methods for the In message. The getIn() implementation provided by the DefaultExchange class implements lazy creation semantics: if the In message is null when getIn() is called, the DefaultExchange class creates a default In message. setOut() , getOut() , hasOut() - Setter and getter methods for the Out message. The getOut() method implicitly supports lazy creation of an Out message. That is, if the current Out message is null , a new message instance is automatically created. setException() , getException() - Getter and setter methods for an exception object (of Throwable type). isFailed() - Returns true if the exchange failed, either due to an exception or due to a fault. isTransacted() - Returns true if the exchange is transacted. isRollbackOnly() - Returns true if the exchange is marked for rollback. getContext() - Returns a reference to the associated CamelContext instance. copy() - Creates a new, identical (apart from the exchange ID) copy of the current exchange object. The body and headers of the In message, the Out message (if any), and the Fault message (if any) are also copied by this operation. setFromEndpoint() , getFromEndpoint() - Getter and setter methods for the consumer endpoint that originated this message (which is typically the endpoint appearing in the from() DSL command at the start of a route). setFromRouteId() , getFromRouteId() - Getters and setters for the route ID that originated this exchange. The getFromRouteId() method should only be called internally. setUnitOfWork() , getUnitOfWork() - Getter and setter methods for the org.apache.camel.spi.UnitOfWork bean property. This property is only required for exchanges that can participate in a transaction. setExchangeId() , getExchangeId() - Getter and setter methods for the exchange ID. Whether or not a custom component uses an exchange ID is an implementation detail. addOnCompletion() - Adds an org.apache.camel.spi.Synchronization callback object, which gets called when processing of the exchange has completed. 
handoverCompletions() - Hands over all of the OnCompletion callback objects to the specified exchange object.
[ "package org.apache.camel; import java.util.Map; import org.apache.camel.spi.Synchronization; import org.apache.camel.spi.UnitOfWork; public interface Exchange { // Exchange property names (string constants) // (Not shown here) ExchangePattern getPattern(); void setPattern(ExchangePattern pattern); Object getProperty(String name); Object getProperty(String name, Object defaultValue); <T> T getProperty(String name, Class<T> type); <T> T getProperty(String name, Object defaultValue, Class<T> type); void setProperty(String name, Object value); Object removeProperty(String name); Map<String, Object> getProperties(); boolean hasProperties(); Message getIn(); <T> T getIn(Class<T> type); void setIn(Message in); Message getOut(); <T> T getOut(Class<T> type); void setOut(Message out); boolean hasOut(); Throwable getException(); <T> T getException(Class<T> type); void setException(Throwable e); boolean isFailed(); boolean isTransacted(); boolean isRollbackOnly(); CamelContext getContext(); Exchange copy(); Endpoint getFromEndpoint(); void setFromEndpoint(Endpoint fromEndpoint); String getFromRouteId(); void setFromRouteId(String fromRouteId); UnitOfWork getUnitOfWork(); void setUnitOfWork(UnitOfWork unitOfWork); String getExchangeId(); void setExchangeId(String id); void addOnCompletion(Synchronization onCompletion); void handoverCompletions(Exchange target); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/ExchangeIntf
2.2. Server Security
2.2. Server Security When a system is used as a server on a public network, it becomes a target for attacks. Hardening the system and locking down services is therefore of paramount importance for the system administrator. Before delving into specific issues, review the following general tips for enhancing server security: Keep all services current, to protect against the latest threats. Use secure protocols whenever possible. Serve only one type of network service per machine whenever possible. Monitor all servers carefully for suspicious activity. 2.2.1. Securing Services With TCP Wrappers and xinetd TCP Wrappers provide access control to a variety of services. Most modern network services, such as SSH, Telnet, and FTP, make use of TCP Wrappers, which stand guard between an incoming request and the requested service. The benefits offered by TCP Wrappers are enhanced when used in conjunction with xinetd , a super server that provides additional access, logging, binding, redirection, and resource utilization control. Note It is a good idea to use iptables firewall rules in conjunction with TCP Wrappers and xinetd to create redundancy within service access controls. Refer to Section 2.8, "Firewalls" for more information about implementing firewalls with iptables commands. The following subsections assume a basic knowledge of each topic and focus on specific security options. 2.2.1.1. Enhancing Security With TCP Wrappers TCP Wrappers are capable of much more than denying access to services. This section illustrates how they can be used to send connection banners, warn of attacks from particular hosts, and enhance logging functionality. Refer to the hosts_options man page for information about the TCP Wrapper functionality and control language. Refer to the xinetd.conf man page available online at http://linux.die.net/man/5/xinetd.conf for available flags, which act as options you can apply to a service. 2.2.1.1.1. TCP Wrappers and Connection Banners Displaying a suitable banner when users connect to a service is a good way to let potential attackers know that the system administrator is being vigilant. You can also control what information about the system is presented to users. To implement a TCP Wrappers banner for a service, use the banner option. This example implements a banner for vsftpd . To begin, create a banner file. It can be anywhere on the system, but it must have the same name as the daemon. For this example, the file is called /etc/banners/vsftpd and contains the following lines: The %c token supplies a variety of client information, such as the user name and hostname, or the user name and IP address, to make the connection even more intimidating. For this banner to be displayed to incoming connections, add the following line to the /etc/hosts.allow file: 2.2.1.1.2. TCP Wrappers and Attack Warnings If a particular host or network has been detected attacking the server, TCP Wrappers can be used to warn the administrator of subsequent attacks from that host or network using the spawn directive. In this example, assume that an attacker from the 206.182.68.0/24 network has been detected attempting to attack the server. Place the following line in the /etc/hosts.deny file to deny any connection attempts from that network, and to log the attempts to a special file: The %d token supplies the name of the service that the attacker was trying to access. To allow the connection and log it, place the spawn directive in the /etc/hosts.allow file. 
Note Because the spawn directive executes any shell command, it is a good idea to create a special script to notify the administrator or execute a chain of commands in the event that a particular client attempts to connect to the server. 2.2.1.1.3. TCP Wrappers and Enhanced Logging If certain types of connections are of more concern than others, the log level can be elevated for that service using the severity option. For this example, assume that anyone attempting to connect to port 23 (the Telnet port) on an FTP server is an attacker. To denote this, place an emerg flag in the log files instead of the default flag, info , and deny the connection. To do this, place the following line in /etc/hosts.deny : This uses the default authpriv logging facility, but elevates the priority from the default value of info to emerg , which posts log messages directly to the console. 2.2.1.2. Enhancing Security With xinetd This section focuses on using xinetd to set a trap service and using it to control resource levels available to any given xinetd service. Setting resource limits for services can help thwart Denial of Service ( DoS ) attacks. Refer to the man pages for xinetd and xinetd.conf for a list of available options. 2.2.1.2.1. Setting a Trap One important feature of xinetd is its ability to add hosts to a global no_access list. Hosts on this list are denied subsequent connections to services managed by xinetd for a specified period or until xinetd is restarted. You can do this using the SENSOR attribute. This is an easy way to block hosts attempting to scan the ports on the server. The first step in setting up a SENSOR is to choose a service you do not plan on using. For this example, Telnet is used. Edit the file /etc/xinetd.d/telnet and change the flags line to read: Add the following line: This denies any further connection attempts to that port by that host for 30 minutes. Other acceptable values for the deny_time attribute are FOREVER, which keeps the ban in effect until xinetd is restarted, and NEVER, which allows the connection and logs it. Finally, the last line should read: This enables the trap itself. While using SENSOR is a good way to detect and stop connections from undesirable hosts, it has two drawbacks: It does not work against stealth scans. An attacker who knows that a SENSOR is running can mount a Denial of Service attack against particular hosts by forging their IP addresses and connecting to the forbidden port. 2.2.1.2.2. Controlling Server Resources Another important feature of xinetd is its ability to set resource limits for services under its control. It does this using the following directives: cps = <number_of_connections> <wait_period> - Limits the rate of incoming connections. This directive takes two arguments: <number_of_connections> - The number of connections per second to handle. If the rate of incoming connections is higher than this, the service is temporarily disabled. The default value is fifty (50). <wait_period> - The number of seconds to wait before re-enabling the service after it has been disabled. The default interval is ten (10) seconds. instances = <number_of_connections> - Specifies the total number of connections allowed to a service. This directive accepts either an integer value or UNLIMITED . per_source = <number_of_connections> - Specifies the number of connections allowed to a service by each host. This directive accepts either an integer value or UNLIMITED . 
rlimit_as = <number[K|M]> - Specifies the amount of memory address space the service can occupy in kilobytes or megabytes. This directive accepts either an integer value or UNLIMITED . rlimit_cpu = <number_of_seconds> - Specifies the amount of time in seconds that a service may occupy the CPU. This directive accepts either an integer value or UNLIMITED . Using these directives can help prevent any single xinetd service from overwhelming the system, resulting in a denial of service.
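To illustrate how these directives fit together, the following is a hypothetical /etc/xinetd.d/telnet fragment that combines the SENSOR trap from the previous section with the resource-control directives described above. The values are examples only, and standard fields such as server , socket_type , and user are omitted for brevity:

service telnet
{
    flags       = SENSOR
    deny_time   = 30
    disable     = no
    cps         = 25 10
    instances   = UNLIMITED
    per_source  = 5
}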
[ "220-Hello, %c 220-All activity on ftp.example.com is logged. 220-Inappropriate use will result in your access privileges being removed.", "vsftpd : ALL : banners /etc/banners/", "ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert", "in.telnetd : ALL : severity emerg", "flags = SENSOR", "deny_time = 30", "disable = no" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security
11.4. Preparing and Adding POSIX-compliant File System Storage
11.4. Preparing and Adding POSIX-compliant File System Storage 11.4.1. Preparing POSIX-compliant File System Storage POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP. Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2 . Important Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. 11.4.2. Adding POSIX-compliant File System Storage This procedure shows you how to attach existing POSIX-compliant file system storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name for the storage domain. Select the Data Center to be associated with the storage domain. The data center selected must be of type POSIX (POSIX compliant FS) . Alternatively, select (none) . Select Data from the Domain Function drop-down list, and POSIX compliant FS from the Storage Type drop-down list. If applicable, select the Format from the drop-down menu. Select a host from the Host drop-down list. Enter the Path to the POSIX file system, as you would normally provide it to the mount command. Enter the VFS Type , as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types. Enter additional Mount Options , as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK .
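The Path , VFS Type , and Mount Options fields map directly onto a manual mount invocation. As a hypothetical illustration only (the device path, options, and mount point are examples), entering /dev/vg_storage/lv_data as the Path , gfs2 as the VFS Type , and noatime,nodiratime as the Mount Options is equivalent to running:

mount -t gfs2 -o noatime,nodiratime /dev/vg_storage/lv_data /mnt/data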
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-preparing_and_adding_posix_compliant_file_system_storage
Chapter 6. Viewing and managing quotas on DNS resources
Chapter 6. Viewing and managing quotas on DNS resources Red Hat OpenStack Platform (RHOSP) provides a set of DNS resource quotas that cloud administrators can modify using the DNS service (designate). Using DNS quotas can help you to secure your RHOSP site from events like denial-of-service attacks, by setting a limit on projects' DNS resources. Using DNS quotas can also help you to track your users' DNS resource consumption. Cloud administrators can set DNS quota values that apply to all projects, or configure one or more quotas on a project-by-project basis. The topics included in this section are: Section 6.1, "Viewing quotas for DNS resources" Section 6.2, "Modifying quotas for DNS resources" Section 6.3, "Resetting DNS resource quotas to their default values" Section 6.4, "DNS service quotas and their default values" 6.1. Viewing quotas for DNS resources You can view resource quotas for Red Hat OpenStack Platform (RHOSP) projects by using the DNS service (designate). Prerequisites You must be a member of the project whose quotas you want to view. A RHOSP user with the admin role can view quotas for any project. Procedure Source your credentials file. Example View the DNS resource quotas set for your project: Sample output A RHOSP user with the admin role can query the quotas for other projects: Obtain the ID for the project whose quotas you want to view. Remember the ID, because you need it for a later step. Using the project ID, view the DNS resource quotas set for the project. Example In this example, the DNS quotas for project ID ecd4341280d645e5959d32a4b7659da1 are displayed: Sample output Additional resources dns quota list in the Command Line Interface Reference 6.2. Modifying quotas for DNS resources You can change DNS resource quotas for Red Hat OpenStack Platform (RHOSP) projects by using the DNS service (designate). Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example Obtain the ID for the project whose quotas you want to modify. Remember the ID, because you need it for a later step. Using the project ID, modify a DNS resource quota for a project. For a list of available quotas, see Section 6.4, "DNS service quotas and their default values" . Example In this example, the zones quota has been modified. The total number of zones that project ID ecd4341280d645e5959d32a4b7659da1 can contain is 30: Sample output Additional resources dns quota set in the Command Line Interface Reference Section 6.4, "DNS service quotas and their default values" 6.3. Resetting DNS resource quotas to their default values You can reset DNS resource quotas for Red Hat OpenStack Platform (RHOSP) projects to their default values by using the DNS service (designate). Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example Obtain the ID for the project whose quotas you want to reset. Remember the ID, because you need it for a later step. Using the project ID, reset the DNS resource quotas for a project. Example In this example, the quotas for a project whose ID is ecd4341280d645e5959d32a4b7659da1 are being reset to the default values: There is no output from a successful openstack dns quota reset command. Verification Confirm that the DNS resource quotas for the project have been reset: Example Sample output Additional resources dns quota reset in the Command Line Interface Reference Section 6.4, "DNS service quotas and their default values" 6.4. 
DNS service quotas and their default values The Red Hat OpenStack Platform (RHOSP) DNS service (designate) has quotas that a cloud administrator can set to limit DNS resource consumption by cloud users in one or in all RHOSP projects. Table 6.1. Zone quotas Quota Default Description zones 10 The number of zones allowed per project. Table 6.2. Records and record set quotas Quota Default Description zone_recordsets 500 The number of record sets allowed per zone. zone_records 500 The number of records allowed per zone. recordset_records 20 The number of records allowed per record set. Table 6.3. Zone export quotas Quota Default Description api_export_size 1000 The number of record sets allowed in a zone export.
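The quota names in the tables above map to options of the openstack dns quota set command, so several limits can be raised in one call. The following sketch is illustrative only; it reuses the example project ID from this chapter, the values are arbitrary, and the exact option names should be verified with openstack dns quota set --help before use:

$ openstack dns quota set --project-id ecd4341280d645e5959d32a4b7659da1 \
    --zones 25 --zone-records 750 --zone-recordsets 750 --api-export-size 2500

# Confirm that all of the modified values took effect:
$ openstack dns quota list --project-id ecd4341280d645e5959d32a4b7659da1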
[ "source ~/overcloudrc", "openstack dns quota list", "+-------------------+-------+ | Field | Value | +-------------------+-------+ | api_export_size | 1000 | | recordset_records | 20 | | zone_records | 500 | | zone_recordsets | 500 | | zones | 10 | +-------------------+-------+", "openstack project list", "openstack dns quota list --project-id ecd4341280d645e5959d32a4b7659da1", "+-------------------+-------+ | Field | Value | +-------------------+-------+ | api_export_size | 2500 | | recordset_records | 25 | | zone_records | 750 | | zone_recordsets | 750 | | zones | 25 | +-------------------+-------+", "source ~/overcloudrc", "openstack project list", "openstack dns quota set --project-id ecd4341280d645e5959d32a4b7659da1 --zones 30", "+-------------------+-------+ | Field | Value | +-------------------+-------+ | api_export_size | 1000 | | recordset_records | 20 | | zone_records | 500 | | zone_recordsets | 500 | | zones | 30 | +-------------------+-------+", "source ~/overcloudrc", "openstack project list", "openstack dns quota reset --project-id ecd4341280d645e5959d32a4b7659da1", "openstack dns quota list --project-id ecd4341280d645e5959d32a4b7659da1", "+-------------------+-------+ | Field | Value | +-------------------+-------+ | api_export_size | 1000 | | recordset_records | 20 | | zone_records | 500 | | zone_recordsets | 500 | | zones | 10 | +-------------------+-------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/manage-quotas-dns-resources_rhosp-dnsaas
Chapter 4. Kubernetes overview
Chapter 4. Kubernetes overview Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds. Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources. Figure 4.1. Evolution of container technologies for classical deployments To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor. You can perform the following actions by using Kubernetes: Sharing resources Orchestrating containers across multiple hosts Installing new hardware configurations Running health checks and self-healing applications Scaling containerized applications 4.1. Kubernetes components Table 4.1. Kubernetes components Component Purpose kube-proxy Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. kube-controller-manager Governs the state of the cluster. kube-scheduler Allocates pods to nodes. etcd Stores cluster data. kube-apiserver Validates and configures data for the API objects. kubelet Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. kubectl Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver . Node Node is a physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster. container runtime container runtime runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. container-registry Stores and accesses the container images. Pod The pod is the smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node. 4.2. Kubernetes resources A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources. 
By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state. Figure 4.2. Kubernetes cluster overview Table 4.2. Kubernetes Resources Resource Purpose Service Kubernetes uses services to expose a running application on a set of pods. ReplicaSets Kubernetes uses ReplicaSets to maintain a constant number of pods. Deployment A resource object that maintains the life cycle of an application. Kubernetes is a core component of OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using OpenShift Container Platform. Figure 4.3. Architecture of Kubernetes A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You can run the Kubernetes application by using worker nodes. You can use the Kubernetes namespace to differentiate cluster resources in a cluster. Namespace scoping is applicable for resource objects, such as deployment, service, and pods. You cannot use namespace for cluster-wide resource objects such as storage class, nodes, and persistent volumes. 4.3. Kubernetes conceptual guidelines Before getting started with OpenShift Container Platform, consider these conceptual guidelines of Kubernetes: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. The API to an OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes between a container running on any other Kubernetes and running on OpenShift Container Platform. No changes to the application are required. OpenShift Container Platform brings added-value features to provide enterprise-ready enhancements to Kubernetes. The OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl. While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command line lacks many features that could make it more user-friendly. OpenShift Container Platform offers a set of features and a command-line tool, oc , that address this.
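Because oc accepts the same verbs and resource names as kubectl, existing Kubernetes commands continue to work unchanged on OpenShift Container Platform. A brief sketch, using placeholder namespace and project names:

$ kubectl get pods -n example-namespace
$ oc get pods -n example-namespace     # same result as the kubectl command
$ oc new-project example-project       # an oc convenience that plain kubectl does not provide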
Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and log management to your containerization platform.
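To make the declarative model described in this chapter concrete, the following sketch applies a minimal Deployment manifest; all names and the image reference are placeholders. Kubernetes creates a ReplicaSet that keeps the requested number of pod replicas running, reconciling the actual state with the desired state:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                  # hypothetical application name
spec:
  replicas: 3                      # desired state: three pod replicas
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: registry.example.com/hello-web:1.0   # placeholder image
EOF

# Observe the ReplicaSet and the three pods it maintains:
$ kubectl get replicasets,pods -l app=hello-web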
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/about/kubernetes-overview
Chapter 11. Using hub templates in PolicyGenerator or PolicyGenTemplate CRs
Chapter 11. Using hub templates in PolicyGenerator or PolicyGenTemplate CRs Topology Aware Lifecycle Manager supports Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP). Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values. Important Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means you must create the objects referenced in the hub template in the same namespace where the policy is created. Important Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching RHACM support for template processing in configuration policies 11.1. Specifying group and site configurations in group PolicyGenerator or PolicyGenTemplate CRs You can manage the configuration of fleets of clusters with ConfigMap CRs by using hub templates to populate the group and site values in the generated policies that get applied to the managed clusters. Using hub templates in site PolicyGenerator or PolicyGenTemplate CRs means that you do not need to create a policy CR for each site. You can group the clusters in a fleet in various categories, depending on the use case, for example hardware type or region. Each cluster should have a label corresponding to the group or groups that the cluster is in. If you manage the configuration values for each group in different ConfigMap CRs, then you require only one group policy CR to apply the changes to all the clusters in the group by using hub templates. The following example shows you how to use three ConfigMap CRs and one PolicyGenerator CR to apply both site and group configuration to clusters grouped by hardware type and region. Note There is a 1 MiB size limit (Kubernetes documentation) for ConfigMap CRs. The effective size for the ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap : argocd.argoproj.io/sync-options: Replace=true Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application. Procedure Create three ConfigMap CRs that contain the group and site configuration: Create a ConfigMap CR named group-hardware-types-configmap to hold the hardware-specific configuration.
For example: apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: "[\"ens5f0\"]" hardware-type-1-sriov-node-policy-pfNames-2: "[\"ens7f0\"]" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: "2-31,34-63" hardware-type-1-cpu-reserved: "0-1,32-33" hardware-type-1-hugepages-default: "1G" hardware-type-1-hugepages-size: "1G" hardware-type-1-hugepages-count: "32" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. Create a ConfigMap CR named group-zones-configmap to hold the regional configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: "[{\"type\":\"kafka\", \"name\":\"kafka-open\", \"url\":\"tcp://10.46.55.190:9092/test\"}]" zone-1-cluster-log-fwd-pipelines: "[{\"inputRefs\":[\"audit\", \"infrastructure\"], \"labels\": {\"label1\": \"test1\", \"label2\": \"test2\", \"label3\": \"test3\", \"label4\": \"test4\"}, \"name\": \"all-to-default\", \"outputRefs\": [\"kafka-open\"]}]" Create a ConfigMap CR named site-data-configmap to hold the site-specific configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: "140" du-sno-1-zone-1-sriov-network-vlan-2: "150" Note Each ConfigMap CR must be in the same namespace as the policy to be generated from the group PolicyGenerator CR. Commit the ConfigMap CRs in Git, and then push to the Git repository being monitored by the Argo CD application. Apply the hardware type and region labels to the clusters. The following command applies to a single cluster named du-sno-1-zone-1 and the labels chosen are "hardware-type": "hardware-type-1" and "group-du-sno-zone": "zone-1" : $ oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}' Depending on your requirements, create a group PolicyGenerator or PolicyGenTemplate CR that uses hub templates to obtain the required data from the ConfigMap objects: Create a group PolicyGenerator CR.
This example PolicyGenerator CR configures logging, VLAN IDs, NICs and Performance Profile for the clusters that match the labels listed the under policyDefaults.placement field: --- apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno-pgt placementBindingDefaults: name: group-du-sno-pgt-placement-binding policyDefaults: placement: labelSelector: matchExpressions: - key: group-du-sno-zone operator: In values: - zone-1 - key: hardware-type operator: In values: - hardware-type-1 remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-pgt-group-du-sno-cfg-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: "10" manifests: - path: source-crs/ClusterLogForwarder.yaml patches: - spec: outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' - path: source-crs/PerformanceProfile-MCP-master.yaml patches: - metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}' reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}' pages: - count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}' size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}' realTimeKernel: enabled: true - name: group-du-sno-pgt-group-du-sno-sriov-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: "100" manifests: - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 
resourceName: du_fh Create a group PolicyGenTemplate CR. This example PolicyGenTemplate CR configures logging, VLAN IDs, NICs and Performance Profile for the clusters that match the labels listed under spec.bindingRules : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: "zone-1" hardware-type: "hardware-type-1" mcp: "master" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: "group-du-sno-cfg-policy" spec: outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: "group-du-sno-cfg-policy" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}' reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}' pages: - size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}' count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh Note To retrieve site-specific configuration values, use the .ManagedClusterName field. This is a template context value set to the name of the target managed cluster. 
To retrieve group-specific configuration, use the .ManagedClusterLabels field. This is a template context value set to the value of the managed cluster's labels. Commit the site PolicyGenerator or PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenerator CRs. See "Syncing new ConfigMap changes to existing PolicyGenerator or PolicyGenTemplate CRs". You can use the same PolicyGenerator or PolicyGenTemplate CR for multiple clusters. If there is a configuration change, then the only modifications you need to make are to the ConfigMap objects that hold the configuration for each cluster and the labels of the managed clusters. 11.2. Syncing new ConfigMap changes to existing PolicyGenerator or PolicyGenTemplate CRs Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a PolicyGenerator or PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates. Procedure Update the contents of your ConfigMap CR, and apply the changes in the hub cluster. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following: Option 1: Delete the existing policy. ArgoCD uses the PolicyGenerator or PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command: $ oc delete policy <policy_name> -n <policy_namespace> Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time you update the ConfigMap . For example: $ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1" Note You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing . Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example: $ oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace> Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml : apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated policy: $ oc apply -f cgr-example.yaml
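A possible verification after the commit and sync, assuming the example names used in this chapter: confirm that the hardware-type and zone labels are present on the managed cluster, and that the generated policies exist in the policy namespace:

$ oc get managedcluster du-sno-1-zone-1 --show-labels
$ oc get policy -n ztp-group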
[ "argocd.argoproj.io/sync-options: Replace=true", "apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: \"[\\\"ens5f0\\\"]\" hardware-type-1-sriov-node-policy-pfNames-2: \"[\\\"ens7f0\\\"]\" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: \"2-31,34-63\" hardware-type-1-cpu-reserved: \"0-1,32-33\" hardware-type-1-hugepages-default: \"1G\" hardware-type-1-hugepages-size: \"1G\" hardware-type-1-hugepages-count: \"32\"", "apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: \"[{\\\"type\\\":\\\"kafka\\\", \\\"name\\\":\\\"kafka-open\\\", \\\"url\\\":\\\"tcp://10.46.55.190:9092/test\\\"}]\" zone-1-cluster-log-fwd-pipelines: \"[{\\\"inputRefs\\\":[\\\"audit\\\", \\\"infrastructure\\\"], \\\"labels\\\": {\\\"label1\\\": \\\"test1\\\", \\\"label2\\\": \\\"test2\\\", \\\"label3\\\": \\\"test3\\\", \\\"label4\\\": \\\"test4\\\"}, \\\"name\\\": \\\"all-to-default\\\", \\\"outputRefs\\\": [\\\"kafka-open\\\"]}]\"", "apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: \"140\" du-sno-1-zone-1-sriov-network-vlan-2: \"150\"", "oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{\"metadata\":{\"labels\":{\"hardware-type\": \"hardware-type-1\", \"group-du-sno-zone\": \"zone-1\"}}}'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno-pgt placementBindingDefaults: name: group-du-sno-pgt-placement-binding policyDefaults: placement: labelSelector: matchExpressions: - key: group-du-sno-zone operator: In values: - zone-1 - key: hardware-type operator: In values: - hardware-type-1 remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-pgt-group-du-sno-cfg-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"10\" manifests: - path: source-crs/ClusterLogForwarder.yaml patches: - spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - path: source-crs/PerformanceProfile-MCP-master.yaml patches: - metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels 
\"hardware-type\")) | toInt hub}}' size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' realTimeKernel: enabled: true - name: group-du-sno-pgt-group-du-sno-sriov-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"100\" manifests: - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: \"zone-1\" hardware-type: \"hardware-type-1\" mcp: \"master\" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels \"hardware-type\")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: 
resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh", "oc delete policy <policy_name> -n <policy_namespace>", "oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"", "oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgr-example.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/edge_computing/ztp-using-hub-cluster-templates-pgt
Part II. Set Up JVM Memory Management
Part II. Set Up JVM Memory Management
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-set_up_jvm_memory_management
2.5. Creating Control Groups
2.5. Creating Control Groups Use the cgcreate command to create cgroups. The syntax for cgcreate is: where: -t (optional) - specifies a user (by user ID, uid) and a group (by group ID, gid) to own the tasks pseudofile for this cgroup. This user can add tasks to the cgroup. Note Note that the only way to remove a task from a cgroup is to move it to a different cgroup. To move a task, the user has to have write access to the destination cgroup; write access to the source cgroup is not necessary. -a (optional) - specifies a user (by user ID, uid) and a group (by group ID, gid) to own all pseudofiles other than tasks for this cgroup. This user can modify access of the tasks in this cgroup to system resources. -g - specifies the hierarchy in which the cgroup should be created, as a comma‐separated list of subsystems associated with hierarchies. If the subsystems in this list are in different hierarchies, the group is created in each of these hierarchies. The list of hierarchies is followed by a colon and the path to the child group relative to the hierarchy. Do not include the hierarchy mount point in the path. For example, the cgroup located in the directory /cgroup/cpu_and_mem/lab1/ is called just lab1 - its path is already uniquely determined because there is at most one hierarchy for a given subsystem. Note also that the group is controlled by all the subsystems that exist in the hierarchies in which the cgroup is created, even though these subsystems have not been specified in the cgcreate command - refer to Example 2.5, "cgcreate usage" . Because all cgroups in the same hierarchy have the same controllers, the child group has the same controllers as its parent. Example 2.5. cgcreate usage Consider a system where the cpu and memory subsystems are mounted together in the cpu_and_mem hierarchy, and the net_cls controller is mounted in a separate hierarchy called net . Run the following command: The cgcreate command creates two groups named test-subgroup , one in the cpu_and_mem hierarchy and one in the net hierarchy. The test-subgroup group in the cpu_and_mem hierarchy is controlled by the memory subsystem, even though it was not specified in the cgcreate command. Alternative method To create a child of the cgroup directly, use the mkdir command: For example:
[ "cgcreate -t uid : gid -a uid : gid -g subsystems : path", "~]# cgcreate -g cpu,net_cls:/test-subgroup", "~]# mkdir /cgroup/ hierarchy / name / child_name", "~]# mkdir /cgroup/cpu_and_mem/group1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-Creating_Cgroups
Chapter 7. Deploying a RHEL for Edge image in a non-network-based environment
Chapter 7. Deploying a RHEL for Edge image in a non-network-based environment The RHEL for Edge Container ( .tar ) in combination with the RHEL for Edge Installer ( .iso ) image type results in an ISO image. The ISO image can be used in disconnected environments during the image deployment to a device. However, building the different artifacts might require network access. Deploying a RHEL for Edge image in a non-network-based environment involves the following high-level steps: Download the RHEL for Edge Container. See Downloading a RHEL for Edge image for information about how to download the RHEL for Edge image. Load the RHEL for Edge Container image into Podman Run the RHEL for Edge Container image in Podman Load the RHEL for Edge Installer blueprint Build the RHEL for Edge Installer image Prepare a .qcow2 disk Boot the Virtual Machine (VM) Install the image 7.1. Creating a RHEL for Edge Container image for non-network-based deployments You can build a running container by loading the downloaded RHEL for Edge Container OSTree commit into Podman. To do so, follow these steps: Prerequisites You created and downloaded a RHEL for Edge Container OSTree commit. You have installed Podman on your system. See the Red Hat Knowledgebase solution How do I install Podman in RHEL . Procedure Navigate to the directory where you have downloaded the RHEL for Edge Container OSTree commit. Load the RHEL for Edge Container OSTree commit into Podman . The command output provides the image ID, for example: @8e0d51f061ff1a51d157804362bc875b649b27f2ae1e66566a15e7e6530cec63 Tag the new RHEL for Edge Container image, using the image ID generated by the previous step. The podman tag command assigns an additional name to the local image. Run the container named edge-container . The podman run -d --name=edge-container command assigns a name to your container based on the localhost/edge-container image. List containers: As a result, Podman runs a container that serves an OSTree repository with the RHEL for Edge Container commit. 7.2. Creating a RHEL for Edge Installer image for non-network-based deployments After you have built a running container to serve a repository with the RHEL for Edge Container commit, create a RHEL for Edge Installer (.iso) image. The RHEL for Edge Installer (.iso) pulls the commit served by the RHEL for Edge Container (.tar) . After the RHEL for Edge Container commit is loaded in Podman, it exposes the OSTree repository through a URL. To create the RHEL for Edge Installer image in the CLI, follow these steps: Prerequisites You created a blueprint for the RHEL for Edge image. You created a RHEL for Edge Container image and deployed it using a web server. Procedure Begin to create the RHEL for Edge Installer image. Where: ref is the same value that the customer used to build the OSTree repository. URL-OSTree-repository is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Creating a RHEL for Edge Container image for non-network-based deployments . blueprint-name is the RHEL for Edge Installer blueprint name. image-type is edge-installer . A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks. Check the image compose status. The command output displays the status in the following format: Note The image creation process takes a few minutes to complete.
To interrupt the image creation process, run: To delete an existing image, run: RHEL image builder pulls the commit that is being served by the running container during the image build. After the image build is complete, you can download the resulting ISO image. Download the image. See Downloading a RHEL for Edge image. After the image is ready, you can use it for non-network deployments. See Installing the RHEL for Edge image for non-network-based deployments . Additional resources Creating a RHEL for Edge Installer image by using the command-line interface for non-network-based deployments 7.3. Installing the RHEL for Edge image for non-network-based deployments To install the RHEL for Edge image, follow these steps: Prerequisites You created a RHEL for Edge Installer ISO image. You stopped the running container. A disk image to install the commit you created. You installed the edk2-ovmf package. You installed the virt-viewer package. You customized your blueprint with a user account. See Creating a blueprint for a RHEL for Edge image using RHEL image builder in RHEL web console . Warning If you do not define a user account customization in your blueprint, you will not be able to log in to the ISO image. Procedure Create a qcow VM disk file to install the ( .iso ) image. That is an image of a hard disk drive for the virtual machine (VM). For example: Use the virt-install command to boot the VM using the disk as a drive and the installer ISO image as a CD-ROM. For example: This command instructs virt-install to: Instruct the VM to use UEFI to boot, instead of the BIOS. Mount the installation ISO. Use the hard disk drive image created in the first step. This opens the Anaconda installer. The RHEL installer starts, loads the Kickstart file from the ISO, and executes the commands, including the command to install the RHEL for Edge image commit. Once the installation is complete, the RHEL installer prompts for login details. Note Anaconda is preconfigured to use the Container commit during the installation. However, you need to set up system configurations, such as disk partitioning and time zone, among others. Connect to the Anaconda GUI with virt-viewer to set up the system configuration: Reboot the system to finish the installation. On the login screen, specify your user account credentials and press Enter . Verification Verify whether the RHEL for Edge image is successfully installed. The command output provides the image commit ID and shows that the installation is successful.
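Before building the installer image, it can be useful to confirm that the running container is actually serving the OSTree repository. The following sketch assumes the 8080:8080 port mapping used earlier in this chapter and the standard layout of an OSTree repository served over HTTP:

$ curl http://localhost:8080/repo/config
$ curl http://localhost:8080/repo/refs/heads/rhel/8/x86_64/edge

The second request returns the commit checksum for the ref that you later pass to composer-cli compose start-ostree with the --ref option.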
[ "sudo podman load -i UUID -container.tar", "sudo podman tag image-ID localhost/edge-container", "sudo podman run -d --name= edge-container -p 8080:8080 localhost/edge-container", "sudo podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2988198c4c4b ..../localhost/edge-container /bin/bash 3 seconds ago Up 2 seconds ago edge-container", "composer-cli compose start-ostree --ref rhel/8/x86_64/edge --url URL-OSTree-repository blueprint-name image-type", "composer-cli compose status", "<UUID> RUNNING date blueprint-name blueprint-version image-type", "composer-cli compose cancel <UUID>", "composer-cli compose delete <UUID>", "qemu-img create -f qcow2 diskfile .qcow2 20G", "virt-install --boot uefi --name VM_NAME --memory 2048 --vcpus 2 --disk path= diskfile.qcow2 --cdrom /var/lib/libvirt/images/ UUID -installer.iso --os-variant rhel9.0", "virt-viewer --connect qemu:///system --wait VM_NAME", "rpm-ostree status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_installing_and_managing_rhel_for_edge_images/assembly_deploying-a-non-network-rhel-for-edge-image_composing-installing-managing-rhel-for-edge-images
Chapter 2. Supported Configurations
Chapter 2. Supported Configurations For information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . Minimum Java version At a minimum, AMQ Broker 7.12 requires Java version 11 to run. Openwire support AMQ Broker 7 has provided support for the Openwire protocol since its release in 2017 as a means to migrate client applications to AMQ 7. With the release of AMQ Broker 7.9.0 in 2021, the Openwire protocol was deprecated and customers were encouraged to migrate their existing Openwire client applications to one of the fully supported protocols of AMQ 7 (CORE, AMQP, MQTT, or STOMP). Starting with the AMQ Broker 8.0 release, the Openwire protocol will be removed from AMQ Broker.
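A minimal pre-flight check before starting the broker, assuming java is on the PATH: the major version that the runtime reports must be 11 or later for AMQ Broker 7.12.

$ java -version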
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/release_notes_for_red_hat_amq_broker_7.12/supported_configurations
Chapter 12. Repository Notifications
Chapter 12. Repository Notifications Quay supports adding notifications to a repository for various events that occur in the repository's lifecycle. To add notifications, click the Settings tab while viewing a repository and select Create Notification . From the When this event occurs field, select the items for which you want to receive notifications: After selecting an event, further configure it by adding how you will be notified of that event. Note Adding notifications requires repository admin permission . The following are examples of repository events. 12.1. Repository Events 12.1.1. Repository Push A successful push of one or more images was made to the repository: 12.1.2. Dockerfile Build Queued Here is a sample response for a Dockerfile build that has been queued into the build system. The response can differ based on the use of optional attributes. 12.1.3. Dockerfile Build Started Here is an example of a Dockerfile build being started by the build system. The response can differ based on some attributes being optional. 12.1.4. Dockerfile Build Successfully Completed Here is a sample response of a Dockerfile build that has been successfully completed by the build system. Note This event will occur simultaneously with a Repository Push event for the built image(s) 12.1.5. Dockerfile Build Failed A Dockerfile build has failed 12.1.6. Dockerfile Build Cancelled A Dockerfile build was cancelled 12.1.7. Vulnerability Detected A vulnerability was detected in the repository 12.2. Notification Actions 12.2.1. Quay Notification A notification will be added to the Quay.io notification area. The notification area can be found by clicking on the bell icon in the top right of any Quay.io page. Quay.io notifications can be set up to be sent to a User , Team , or the organization as a whole. 12.2.2. E-mail An e-mail will be sent to the specified address describing the event that occurred. Note All e-mail addresses will have to be verified on a per-repository basis 12.2.3. Webhook POST An HTTP POST call will be made to the specified URL with the event's data (see above for each event's data format). When the URL is HTTPS, the call will have an SSL client certificate set from Quay.io. Verification of this certificate will prove the call originated from Quay.io. Responses with status codes in the 2xx range are considered successful. Responses with any other status codes will be considered failures and result in a retry of the webhook notification. 12.2.4. Flowdock Notification Posts a message to Flowdock. 12.2.5. Hipchat Notification Posts a message to HipChat. 12.2.6. Slack Notification Posts a message to Slack.
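A hypothetical way to exercise a webhook receiver before wiring it into Quay is to replay one of the sample payloads above with curl; https://example.com/hook is a placeholder endpoint:

$ curl -i -X POST https://example.com/hook \
    -H 'Content-Type: application/json' \
    -d '{"name": "repository", "repository": "dgangaia/test", "namespace": "dgangaia", "docker_url": "quay.io/dgangaia/test", "homepage": "https://quay.io/repository/dgangaia/repository", "updated_tags": ["latest"]}'

A 2xx status from the receiver counts as delivered; any other status causes the notification to be retried.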
[ "{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }", "{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": 
\"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }", "{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }", "{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }", "{ \"repository\": \"dgangaia/repository\", \"namespace\": \"dgangaia\", \"name\": \"repository\", \"docker_url\": 
\"quay.io/dgangaia/repository\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"tags\": [\"latest\", \"othertag\"], \"vulnerability\": { \"id\": \"CVE-1234-5678\", \"description\": \"This is a bad vulnerability\", \"link\": \"http://url/to/vuln/info\", \"priority\": \"Critical\", \"has_fix\": true } }" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/repository_notifications
Chapter 2. BrokerTemplateInstance [template.openshift.io/v1]
Chapter 2. BrokerTemplateInstance [template.openshift.io/v1] Description BrokerTemplateInstance holds the service broker-related state associated with a TemplateInstance. BrokerTemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BrokerTemplateInstanceSpec describes the state of a BrokerTemplateInstance. 2.1.1. .spec Description BrokerTemplateInstanceSpec describes the state of a BrokerTemplateInstance. Type object Required templateInstance secret Property Type Description bindingIDs array (string) bindingids is a list of 'binding_id's provided during successive bind calls to the template service broker. secret ObjectReference secret is a reference to a Secret object residing in a namespace, containing the necessary template parameters. templateInstance ObjectReference templateinstance is a reference to a TemplateInstance object residing in a namespace. 2.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/brokertemplateinstances DELETE : delete collection of BrokerTemplateInstance GET : list or watch objects of kind BrokerTemplateInstance POST : create a BrokerTemplateInstance /apis/template.openshift.io/v1/watch/brokertemplateinstances GET : watch individual changes to a list of BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/brokertemplateinstances/{name} DELETE : delete a BrokerTemplateInstance GET : read the specified BrokerTemplateInstance PATCH : partially update the specified BrokerTemplateInstance PUT : replace the specified BrokerTemplateInstance /apis/template.openshift.io/v1/watch/brokertemplateinstances/{name} GET : watch changes to an object of kind BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/template.openshift.io/v1/brokertemplateinstances HTTP method DELETE Description delete collection of BrokerTemplateInstance Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind BrokerTemplateInstance Table 2.3.
HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstanceList schema 401 - Unauthorized Empty HTTP method POST Description create a BrokerTemplateInstance Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body BrokerTemplateInstance schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 202 - Accepted BrokerTemplateInstance schema 401 - Unauthorized Empty 2.2.2. /apis/template.openshift.io/v1/watch/brokertemplateinstances HTTP method GET Description watch individual changes to a list of BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/template.openshift.io/v1/brokertemplateinstances/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the BrokerTemplateInstance HTTP method DELETE Description delete a BrokerTemplateInstance Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BrokerTemplateInstance Table 2.11. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BrokerTemplateInstance Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BrokerTemplateInstance Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body BrokerTemplateInstance schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 401 - Unauthorized Empty 2.2.4. /apis/template.openshift.io/v1/watch/brokertemplateinstances/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the BrokerTemplateInstance HTTP method GET Description watch changes to an object of kind BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
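To make the spec fields above concrete, the following is a minimal sketch of a BrokerTemplateInstance object; the metadata name, namespace, and placeholder values are hypothetical, not taken from a real cluster:

  apiVersion: template.openshift.io/v1
  kind: BrokerTemplateInstance
  metadata:
    name: <broker_template_instance_name>   # hypothetical name
  spec:
    templateInstance:                        # reference to the TemplateInstance in its namespace
      kind: TemplateInstance
      namespace: <namespace>
      name: <template_instance_name>
    secret:                                  # reference to the Secret containing the template parameters
      kind: Secret
      namespace: <namespace>
      name: <parameters_secret_name>
    bindingIDs:                              # binding_id values from successive bind calls
      - <binding_id>

Because the resource is cluster-scoped, such objects can be listed with, for example, oc get brokertemplateinstances.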
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/template_apis/brokertemplateinstance-template-openshift-io-v1
9.3. NFS Server Configuration
9.3. NFS Server Configuration There are three ways to configure an NFS server under Red Hat Enterprise Linux: using the NFS Server Configuration Tool ( system-config-nfs ), manually editing the NFS configuration file ( /etc/exports ), or using the /usr/sbin/exportfs command. For instructions on using the NFS Server Configuration Tool , refer to the chapter titled Network File System (NFS) in the System Administrators Guide . The remainder of this section discusses manually editing /etc/exports and using the /usr/sbin/exportfs command to export NFS file systems. 9.3.1. The /etc/exports Configuration File The /etc/exports file controls which file systems are exported to remote hosts and specifies options. Blank lines are ignored, comments can be made by starting a line with the hash mark ( # ), and long lines can be wrapped with a backslash ( \ ). Each exported file system should be on its own individual line, and any lists of authorized hosts placed after an exported file system must be separated by space characters. Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis. A line for an exported file system has the following structure: In this structure, replace <export> with the directory being exported, replace <host1> with the host or network to which the export is being shared, and replace <options> with the options for that host or network. Additional hosts can be specified in a space-separated list. The following methods can be used to specify host names: single host - Where one particular host is specified with a fully qualified domain name, hostname, or IP address. wildcards - Where a * or ? character is used to take into account a grouping of fully qualified domain names that match a particular string of letters. Wildcards should not be used with IP addresses; however, it is possible for them to work accidentally if reverse DNS lookups fail. Be careful when using wildcards with fully qualified domain names, as they tend to be more exact than expected. For example, the use of *.example.com as a wildcard allows sales.example.com to access an exported file system, but not bob.sales.example.com. To match both possibilities, both *.example.com and *.*.example.com must be specified. IP networks - Allows the matching of hosts based on their IP addresses within a larger network. For example, 192.168.0.0/28 allows the first 16 IP addresses, from 192.168.0.0 to 192.168.0.15, to access the exported file system, but not 192.168.0.16 and higher. netgroups - Permits an NIS netgroup name, written as @ <group-name> , to be used. This effectively puts the NIS server in charge of access control for this exported file system, where users can be added and removed from an NIS group without affecting /etc/exports . In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example: In the example, bob.example.com can mount /exported/directory/ . Because no options are specified in this example, the following default NFS options take effect: ro - Mounts of the exported file system are read-only. Remote hosts are not able to make changes to the data shared on the file system. To allow hosts to make changes to the file system, the read/write ( rw ) option must be specified. wdelay - Causes the NFS server to delay writing to the disk if it suspects another write request is imminent.
This can improve performance by reducing the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. The no_wdelay option turns off this feature, but is only available when using the sync option. root_squash - Prevents root users connected remotely from having root privileges and assigns them the user ID for the user nfsnobody . This effectively "squashes" the power of the remote root user to the lowest local user, preventing unauthorized alteration of files on the remote server. Alternatively, the no_root_squash option turns off root squashing. To squash every remote user, including root, use the all_squash option. To specify the user and group IDs to use with remote users from a particular host, use the anonuid and anongid options, respectively. In this case, a special user account can be created for remote NFS users to share and specify (anonuid= <uid-value> ,anongid= <gid-value> ) , where <uid-value> is the user ID number and <gid-value> is the group ID number. Important By default, access control lists ( ACLs ) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system. For more information about this feature, refer to the chapter titled Network File System (NFS) in the System Administrators Guide . Each default for every exported file system must be explicitly overridden if different behavior is required. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options: In this example, 192.168.0.3 can mount /another/exported/directory/ read/write and all transfers to disk are committed to the disk before the write request by the client is completed. Additionally, other options are available where no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to the exports man page for details on these lesser-used options. Warning The format of the /etc/exports file is very precise, particularly in regard to use of the space character. Remember to always separate exported file systems from hosts, and hosts from one another, with a space character. However, there should be no other space characters in the file except on comment lines. For example, the following two lines do not mean the same thing: The first line allows only users from bob.example.com read/write access to the /home directory. The second line allows users from bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write. For detailed instructions on configuring an NFS server by editing /etc/exports , refer to the chapter titled Network File System (NFS) in the System Administrators Guide .
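As an illustration combining several of the options described above, a hypothetical /etc/exports line that squashes all remote users from one network to a dedicated anonymous account might look like the following (the directory, network, and ID values are examples only, not recommendations):

  /exported/directory 192.168.0.0/28(rw,sync,all_squash,anonuid=502,anongid=502)

Here every remote user from the 192.168.0.0/28 network, including root, is mapped to local user ID 502 and group ID 502, and writes are committed to disk before the client's request completes.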
[ "<export> <host1> ( <options> ) <hostN> ( <options> )", "/exported/directory bob.example.com", "/another/exported/directory 192.168.0.3(rw,sync)", "/home bob.example.com(rw) /home bob.example.com (rw)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-nfs-server-export
15.5.7. Logging Options
15.5.7. Logging Options The following list describes directives that affect vsftpd 's logging behavior. dual_log_enable - When enabled in conjunction with xferlog_enable , vsftpd writes two files simultaneously: a wu-ftpd -compatible log to the file specified in the xferlog_file directive ( /var/log/xferlog by default) and a standard vsftpd log file specified in the vsftpd_log_file directive ( /var/log/vsftpd.log by default). The default value is NO . log_ftp_protocol - When enabled in conjunction with xferlog_enable and with xferlog_std_format set to NO , all FTP commands and responses are logged. This directive is useful for debugging. The default value is NO . syslog_enable - When enabled in conjunction with xferlog_enable , all logging normally written to the standard vsftpd log file specified in the vsftpd_log_file directive ( /var/log/vsftpd.log by default) is sent to the system logger instead under the FTPD facility. The default value is NO . vsftpd_log_file - Specifies the vsftpd log file. For this file to be used, xferlog_enable must be enabled and xferlog_std_format must either be set to NO or, if xferlog_std_format is set to YES , dual_log_enable must be enabled. It is important to note that if syslog_enable is set to YES , the system log is used instead of the file specified in this directive. The default value is /var/log/vsftpd.log . xferlog_enable - When enabled, vsftpd logs connections ( vsftpd format only) and file transfer information to the log file specified in the vsftpd_log_file directive ( /var/log/vsftpd.log by default). If xferlog_std_format is set to YES , file transfer information is logged but connections are not, and the log file specified in xferlog_file ( /var/log/xferlog by default) is used instead. It is important to note that both log files and log formats are used if dual_log_enable is set to YES . The default value is NO . Note that in Red Hat Enterprise Linux, the value is set to YES . xferlog_file - Specifies the wu-ftpd -compatible log file. For this file to be used, xferlog_enable must be enabled and xferlog_std_format must be set to YES . It is also used if dual_log_enable is set to YES . The default value is /var/log/xferlog . xferlog_std_format - When enabled in conjunction with xferlog_enable , only a wu-ftpd -compatible file transfer log is written to the file specified in the xferlog_file directive ( /var/log/xferlog by default). It is important to note that this file only logs file transfers and does not log connections to the server. The default value is NO . Note that in Red Hat Enterprise Linux, the value is set to YES . Important To maintain compatibility with log files written by the older wu-ftpd FTP server, the xferlog_std_format directive is set to YES under Red Hat Enterprise Linux. However, this setting means that connections to the server are not logged. To both log connections in vsftpd format and maintain a wu-ftpd -compatible file transfer log, set dual_log_enable to YES . If maintaining a wu-ftpd -compatible file transfer log is not important, either set xferlog_std_format to NO , comment the line with a hash mark ( # ), or delete the line entirely.
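For example, a minimal vsftpd.conf fragment implementing the dual-logging recommendation above might read as follows; the two file paths simply restate the defaults given above, and this is a sketch rather than a mandatory configuration:

  # Log both connections (vsftpd format) and transfers (wu-ftpd format)
  xferlog_enable=YES
  xferlog_std_format=YES
  dual_log_enable=YES
  xferlog_file=/var/log/xferlog
  vsftpd_log_file=/var/log/vsftpd.log

With these settings, file transfers are recorded in the wu-ftpd -compatible /var/log/xferlog , while connections and transfers are also logged in vsftpd format to /var/log/vsftpd.log .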
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ftp-vsftpd-conf-opt-log
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 operating systems, including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.412_release_notes/pr01
Chapter 3. EgressFirewall [k8s.ovn.org/v1]
Chapter 3. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressFirewall. status object Observed status of EgressFirewall 3.1.1. .spec Description Specification of the desired behavior of EgressFirewall. Type object Required egress Property Type Description egress array a collection of egress firewall rule objects egress[] object EgressFirewallRule is a single egressfirewall rule object 3.1.2. .spec.egress Description a collection of egress firewall rule objects Type array 3.1.3. .spec.egress[] Description EgressFirewallRule is a single egressfirewall rule object Type object Required to type Property Type Description ports array ports specify what ports and protocols the rule applies to ports[] object EgressFirewallPort specifies the port to allow or deny traffic to to object to is the target that traffic is allowed/denied to type string type marks this as an "Allow" or "Deny" rule 3.1.4. .spec.egress[].ports Description ports specify what ports and protocols the rule applies to Type array 3.1.5. .spec.egress[].ports[] Description EgressFirewallPort specifies the port to allow or deny traffic to Type object Required port protocol Property Type Description port integer port that the traffic must match protocol string protocol (tcp, udp, sctp) that the traffic must match. 3.1.6. .spec.egress[].to Description to is the target that traffic is allowed/denied to Type object Property Type Description cidrSelector string cidrSelector is the CIDR range to allow/deny traffic to. If this is set, dnsName and nodeSelector must be unset. dnsName string dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. nodeSelector object nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. 3.1.7. .spec.egress[].to.nodeSelector Description nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.8. .spec.egress[].to.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.9. .spec.egress[].to.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.10. .status Description Observed status of EgressFirewall Type object Property Type Description status string 3.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressfirewalls GET : list objects of kind EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls DELETE : delete collection of EgressFirewall GET : list objects of kind EgressFirewall POST : create an EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} DELETE : delete an EgressFirewall GET : read the specified EgressFirewall PATCH : partially update the specified EgressFirewall PUT : replace the specified EgressFirewall 3.2.1. /apis/k8s.ovn.org/v1/egressfirewalls Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind EgressFirewall Table 3.2. HTTP responses HTTP code Response body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty 3.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressFirewall Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressFirewall Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects.
If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Response body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressFirewall Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body EgressFirewall schema Table 3.11. HTTP responses HTTP code Response body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 202 - Accepted EgressFirewall schema 401 - Unauthorized Empty 3.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the EgressFirewall namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed.
HTTP method DELETE Description delete an EgressFirewall Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. orphanDependents boolean Deprecated: please use PropagationPolicy; this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressFirewall Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Response body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressFirewall Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Response body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressFirewall Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body EgressFirewall schema Table 3.24. HTTP responses HTTP code Response body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty
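As an illustration of the spec described above, the following is a minimal EgressFirewall sketch that allows TCP traffic on port 443 to one CIDR block and denies traffic to a particular DNS name; the namespace and target values are hypothetical placeholders, and traffic matching no rule is allowed by default:

  apiVersion: k8s.ovn.org/v1
  kind: EgressFirewall
  metadata:
    name: default
    namespace: <namespace>
  spec:
    egress:
    - type: Allow                      # rules are checked in order; first match wins
      to:
        cidrSelector: 203.0.113.0/24   # documentation CIDR used as an example
      ports:
      - protocol: TCP
        port: 443
    - type: Deny
      to:
        dnsName: www.example.com

A manifest like this could then be created through the POST endpoint listed above, /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls, for example with oc apply -f <file>.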
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/egressfirewall-k8s-ovn-org-v1
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the scale of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/net/8.0/html/release_notes_for_.net_8.0_containers/making-open-source-more-inclusive
Deploying and Managing Streams for Apache Kafka on OpenShift
Deploying and Managing Streams for Apache Kafka on OpenShift Red Hat Streams for Apache Kafka 2.7 Deploy and manage Streams for Apache Kafka 2.7 on OpenShift Container Platform
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /", "get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3", "get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple", "get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user", "delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 kafkaMetadataState: KRaft 4 kafkaVersion: 3.7.0 5 kafkaNodePools: 6 - name: broker - name: controller listeners: 7 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 3 8 operatorLastSuccessfulVersion: 2.7 9", "get kafka <kafka_resource_name> -o jsonpath='{.status}' | jq", "sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml", "create secret docker-registry <pull_secret_name> --docker-server=registry.redhat.io --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: - name: STRIMZI_IMAGE_PULL_SECRETS value: \"<pull_secret_name>\"", "create -f install/strimzi-admin", "create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= 
user2", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3", "create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #", "create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml", "apply -f examples/kafka/kraft/kafka.yaml", "apply -f examples/kafka/kraft/kafka-ephemeral.yaml", "apply -f examples/kafka/kafka-with-node-pools.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.7.0 # config: # log.message.format.version: \"3.7\" inter.broker.protocol.version: \"3.7\" #", "apply -f examples/kafka/kafka-ephemeral.yaml", "apply -f examples/kafka/kafka-persistent.yaml", "get 
pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file>", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file>", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0", "exec -ti my-cluster -zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /", "apply -f examples/connect/kafka-connect.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #", "oc apply -f <kafka_connect_configuration_file>", "FROM registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001", "tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-<version>.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mongodb-driver-core-<version>.jar │ ├── README.md │ └── # ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-<version>.jar │ ├── mysql-connector-java-<version>.jar │ ├── README.md │ └── # └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-<version>.jar ├── debezium-core-<version>.jar ├── LICENSE.txt ├── postgresql-<version>.jar ├── protobuf-java-<version>.jar ├── README.md └── #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: \"/opt/kafka/LICENSE\" 7 topic: my-topic 8 #", "apply -f examples/connect/source-connector.yaml", "touch examples/connect/sink-connector.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: 
my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4", "apply -f examples/connect/sink-connector.yaml", "get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector", "exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning", "curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'", "selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: connector.client.config.override.policy: None", "apply -f examples/mirror-maker/kafka-mirror-maker.yaml", "apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1", "apply -f examples/bridge/kafka-bridge.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0", "get pods -o name pod/kafka-consumer pod/my-bridge-bridge-<pod_id>", "port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &", "selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 4 value: \"120000\" - name: STRIMZI_LOG_LEVEL 5 value: INFO - name: STRIMZI_TLS_ENABLED 6 value: \"false\" - name: STRIMZI_JAVA_OPTS 7 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 8 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA 9 value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED 10 value: \"false\" - name: STRIMZI_SASL_ENABLED 11 value: \"false\" - name: STRIMZI_SASL_USERNAME 12 value: 
\"admin\" - name: STRIMZI_SASL_PASSWORD 13 value: \"password\" - name: STRIMZI_SASL_MECHANISM 14 value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL 15 value: \"SSL\" - name: STRIMZI_USE_FINALIZERS value: \"false\" 16", ". env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: \"/path/to/truststore.p12\" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: \" TRUSTSTORE-PASSWORD \" - name: STRIMZI_KEYSTORE_LOCATION 3 value: \"/path/to/keystore.p12\" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: \" KEYSTORE-PASSWORD \"", "get deployments", "NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_ZOOKEEPER_CONNECT 1 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 2 value: \"18000\" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 3 value: \"6\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" - name: STRIMZI_LOG_LEVEL value: INFO - name: STRIMZI_TLS_ENABLED value: \"false\" - name: STRIMZI_JAVA_OPTS value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED value: \"false\" - name: STRIMZI_SASL_ENABLED value: \"false\" - name: STRIMZI_SASL_USERNAME value: \"admin\" - name: STRIMZI_SASL_PASSWORD value: \"password\" - name: STRIMZI_SASL_MECHANISM value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL value: \"SSL\"", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-user-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: \"true\" - name: STRIMZI_CA_VALIDITY 12 value: \"365\" - name: STRIMZI_CA_RENEWAL 13 value: \"30\" - name: STRIMZI_JAVA_OPTS 14 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_SECRET_PREFIX 16 value: \"kafka-\" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: \"true\" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000", ". 
env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs ...", "create -f install/user-operator", "get deployments", "NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1", "env: - name: STRIMZI_FEATURE_GATES value: +FeatureGate1,-FeatureGate2", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: controller labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"", "annotate kafka my-cluster strimzi.io/kraft=\"migration\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: \"migration\"", "get pods -n my-project", "NAME READY STATUS RESTARTS my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-controller-3 1/1 Running 0 my-cluster-controller-4 1/1 Running 0 my-cluster-controller-5 1/1 Running 0", "get kafka my-cluster -n my-project -w", "NAME ... METADATA STATE my-cluster ... Zookeeper my-cluster ... KRaftMigration my-cluster ... KRaftDualWriting my-cluster ... KRaftPostMigration", "annotate kafka my-cluster strimzi.io/kraft=\"enabled\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: \"enabled\"", "get kafka my-cluster -n my-project -w", "NAME ... METADATA STATE my-cluster ... Zookeeper my-cluster ... KRaftMigration my-cluster ... KRaftDualWriting my-cluster ... KRaftPostMigration my-cluster ... PreKRaft my-cluster ... 
KRaft", "annotate kafka my-cluster strimzi.io/kraft=\"rollback\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: \"rollback\"", "delete KafkaNodePool controller -n my-project", "annotate kafka my-cluster strimzi.io/kraft=\"disabled\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: \"disabled\"", "apply -f <kafka_configuration_file>", "examples ├── user 1 ├── topic 2 ├── security 3 │ ├── tls-auth │ ├── scram-sha-512-auth │ └── keycloak-authorization ├── mirror-maker 4 ├── metrics 5 ├── kafka 6 │ └── nodepools 7 ├── cruise-control 8 ├── connect 9 └── bridge 10", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 3.7.0 2 logging: 3 type: inline loggers: kafka.root.logger.level: INFO resources: 4 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" readinessProbe: 5 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 6 -Xms: 8192m -Xmx: 8192m image: my-org/my-image:latest 7 listeners: 8 - name: plain 9 port: 9092 10 type: internal 11 tls: false 12 configuration: useServiceDnsDomain: true 13 - name: tls port: 9093 type: internal tls: true authentication: 14 type: tls - name: external1 15 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 16 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 17 type: simple config: 18 auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: \"3.7\" storage: 19 type: persistent-claim 20 size: 10000Gi rack: 21 topologyKey: topology.kubernetes.io/zone metricsConfig: 22 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 23 name: my-config-map key: my-key # zookeeper: 24 replicas: 3 25 logging: 26 type: inline loggers: zookeeper.root.logger: INFO resources: requests: memory: 8Gi cpu: \"2\" limits: memory: 8Gi cpu: \"2\" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metricsConfig: # entityOperator: 27 tlsSidecar: 28 resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 29 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 30 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" kafkaExporter: 31 # cruiseControl: 32 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6", "apply -f <kafka_configuration_file>", "annotate pod <cluster_name>-kafka-<index_number> strimzi.io/delete-pod-and-pvc=\"true\"", "annotate pod <cluster_name>-zookeeper-<index_number> 
strimzi.io/delete-pod-and-pvc=\"true\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kraft-dual-role 1 labels: strimzi.io/cluster: my-cluster 2 spec: replicas: 3 3 roles: 4 - controller - broker storage: 5 type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: 6 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker 1 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: config: reserved.broker.max.id: 10000 #", "annotate kafkanodepool pool-a strimzi.io/next-node-ids=\"[0,1,2,10-20,30]\"", "annotate kafkanodepool pool-b strimzi.io/remove-node-ids=\"[60-50,9,8,7]\"", "annotate kafkanodepool pool-a strimzi.io/next-node-ids-", "annotate kafkanodepool pool-b strimzi.io/remove-node-ids-", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0", "scale kafkanodepool pool-a --replicas=4", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: add-brokers brokers: [3]", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [3]", "scale kafkanodepool pool-a --replicas=3", "NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-0 1/1 Running 0 my-cluster-pool-b-kafka-1 1/1 Running 0 my-cluster-pool-b-kafka-2 1/1 Running 0", "scale kafkanodepool pool-a --replicas=4", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0 my-cluster-pool-a-7 1/1 Running 0 my-cluster-pool-b-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 my-cluster-pool-b-6 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [6]", "scale kafkanodepool pool-b --replicas=3", "NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-2 1/1 Running 0 my-cluster-pool-b-kafka-3 1/1 Running 0 my-cluster-pool-b-kafka-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false #", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 
my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [0, 1, 2]", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # --- apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [3, 4, 5]", "delete kafkanodepool pool-b -n <my_cluster_operator_namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: gp2-ebs #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: roles: - broker replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 1Ti class: gp3-ebs #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [0, 1, 2]", "delete kafkanodepool pool-a", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: all-zones provisioner: kubernetes.io/my-storage parameters: type: ssd volumeBindingMode: WaitForFirstConsumer", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-1 labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-1 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-2 labels: strimzi.io/cluster: my-cluster spec: replicas: 
4 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-2 #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-zone-1-kafka-0 1/1 Running 0 my-cluster-pool-zone-1-kafka-1 1/1 Running 0 my-cluster-pool-zone-1-kafka-2 1/1 Running 0 my-cluster-pool-zone-2-kafka-3 1/1 Running 0 my-cluster-pool-zone-2-kafka-4 1/1 Running 0 my-cluster-pool-zone-2-kafka-5 1/1 Running 0 my-cluster-pool-zone-2-kafka-6 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kafka labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false", "apply -f <node_pool_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster annotations: strimzi.io/node-pools: enabled spec: kafka: # zookeeper: #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #", "env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2", "env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: \"^key1.*\"", "env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2", "env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64", "<cluster-name>-kafka-0.<cluster-name>-kafka-brokers.<namespace>.svc.
cluster.local", "# env: # - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #", "# env: # - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" #", "annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"", "annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"", "describe <kind_of_custom_resource> <name_of_custom_resource>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused", "env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3", "spec: containers: - name: strimzi-cluster-operator # env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: \"true\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: \"my-strimzi-cluster-operator\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject", "create -f install/cluster-operator -n myproject", "get deployments -n myproject", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"HTTP_PROXY\" value: \"http://proxy.com\" 1 - name: \"HTTPS_PROXY\" value: \"https://proxy.com\" 2 - name: \"NO_PROXY\" value: \"internal.com, other.domain.com\" 3 #", "edit deployment strimzi-cluster-operator", "create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"FIPS_MODE\" value: \"disabled\" 1 #", "edit deployment strimzi-cluster-operator", "apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter 
value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 21", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # cluster group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: \"*\"", "apply -f KAFKA-USER-CONFIG-FILE", "get KafkaConnector", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: file: \"/opt/kafka/LICENSE\" topic: my-topic state: stopped #", "get KafkaConnector", "annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=\"true\"", "get KafkaConnector", "describe KafkaConnector <kafka_connector_name>", "annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=\"0\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 
connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 1 replicas: 3 2 connectCluster: \"my-cluster-target\" 3 clusters: 4 - alias: \"my-cluster-source\" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: \"my-cluster-target\" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: \"my-cluster-source\" 15 targetCluster: \"my-cluster-target\" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: \"false\" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" topicsPattern: \"topic1|topic2|topic3\" 33 groupsPattern: \"group1|group2|group3\" 34 resources: 35 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-target\" config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 
2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # heartbeatConnector: config: producer.override.request.timeout.ms: 30000 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" checkpointConnector: tasksMax: 10 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.7\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.7\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file> -n <namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: \"*\" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: 
tls authorization: type: simple acls: # cluster group - resource: type: group name: mirrormaker2-cluster operations: - Read # access to config.storage.topic - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write # access to status.storage.topic - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write # access to offset.storage.topic - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: \"*\" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write", "apply -f <kafka_user_configuration_file> -n <namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.7.0 replicas: 1 connectCluster: \"my-target-cluster\" clusters: - alias: \"my-source-cluster\" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert certificate: ca.crt authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: \"my-target-cluster\" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert certificate: ca.crt authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: \"my-source-cluster\" targetCluster: \"my-target-cluster\" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: \"false\" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: \"true\" topicsPattern: \"topic1|topic2|topic3\" groupsPattern: \"group1|group2|group3\"", "apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>", "get KafkaMirrorMaker2", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 replicas: 3 connectCluster: \"my-cluster-target\" clusters: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 autoRestart: enabled: true state: stopped #", "get KafkaMirrorMaker2", "describe KafkaMirrorMaker2 <mirrormaker_cluster_name>", "annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector=<mirrormaker_connector_name>\"", "annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector=my-connector\"", "get KafkaMirrorMaker2", "describe KafkaMirrorMaker2 <mirrormaker_cluster_name>", "annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> 
\"strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>\"", "annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector-task=my-connector:0\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: \"my-group\" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: \"my-topic|other-topic\" 10 resources: 11 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: 19 type: opentelemetry", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: INFO # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG jvmOptions: 11 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 16", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: ephemeral 
# zookeeper: storage: type: ephemeral #", "/var/lib/kafka/data/kafka-log IDX", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: storage: type: persistent-claim size: 1000Gi #", "storage: type: persistent-claim size: 500Gi class: my-storage-class", "storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # # zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c #", "/var/lib/kafka/data/kafka-log IDX", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: persistent-claim size: 2000Gi class: my-storage-class # zookeeper: #", "apply -f <kafka_configuration_file>", "get pv", "NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false #", "/var/lib/kafka/data- id /kafka-log idx", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: tieredStorage: type: custom 1 remoteStorageManager: 2 className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: storage.bucket.name: my-bucket 3 # config: rlmm.config.remote.log.metadata.topic.replication.factor: 1 4 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: \"kubernetes.io/hostname\" 
#", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #", "apply -f <kafka_configuration_file>", "label node NAME-OF-NODE node-type=fast-network", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #", "apply -f <kafka_configuration_file>", "adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule", "label node NAME-OF-NODE dedicated=Kafka", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #", "apply -f <kafka_configuration_file>", "logging: type: inline loggers: kafka.root.logger.level: INFO", "logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key", "kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"", "create configmap logging-configmap --from-file=log4j.properties", "Define the logger kafka.root.logger.level=\"INFO\"", "logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties", "apply -f <kafka_configuration_file>", "create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "edit configmap strimzi-cluster-operator", "rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4", "appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)", "kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: # appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)", "edit configmap strimzi-cluster-operator", "create -f 
install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)", "create configmap logging-configmap --from-file=log4j2.properties", "Define the logger rootLogger.level=\"INFO\" Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)", "spec: # entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties", "create -f install/cluster-operator -n my-cluster-operator-namespace", "logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider #", "apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 #", "apply -f <kafka_connect_configuration_file>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: ${configmaps:my-project/my-connector-configuration:option1} #", "apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2 # externalConfiguration: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey #", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 
kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: ${env:AWS_ACCESS_KEY_ID} option: ${env:AWS_SECRET_ACCESS_KEY} #", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"${file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}\" database.password: \"${file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}\" database.server.id: \"184054\" #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 # externalConfiguration: volumes: 3 - name: cluster-ca 4 secret: secretName: my-cluster-cluster-ca-cert 5 - name: my-user secret: secretName: my-user 6", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: \"${directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}\" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: \"${directory:/opt/kafka/external-configuration/my-user:user.crt}\" database.history.producer.ssl.keystore.key: \"${directory:/opt/kafka/external-configuration/my-user:user.key}\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: pod: metadata: labels: mylabel: myvalue #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: pod: terminationGracePeriodSeconds: 120 # #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster spec: topicName: topic-name-1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 1 spec: topicName: My.Topic.1 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic---c55e57fe2546a33f9e603caf57165db4072e827e #", "run kafka-admin -ti 
--image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f <topic_config_file>", "get kafkatopics -o wide -w -n <namespace>", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 my-topic-3 my-cluster 10 3 True", "get kafkatopics my-topic-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-13T10:14:43.351550Z\" message: Number of partitions cannot be decreased reason: PartitionDecreaseException status: \"True\" type: NotReady", "get kafkatopics my-topic-2 -o wide -w -n <namespace>", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-2 my-cluster 10 3 True", "get kafkatopics my-topic-2 -o yaml", "status: conditions: - lastTransitionTime: '2022-06-13T10:15:03.761084Z' status: 'True' type: Ready", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 1 replicas: 3 2 config: min.insync.replicas: 2 3 #", "annotate kafkatopic my-topic-1 strimzi.io/managed=\"false\"", "get kafkatopics my-topic-1 -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 124 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 124 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z", "delete kafkatopic <kafka_topic_name>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f <topic_configuration_file>", "get kafkatopics my-topic-1 -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 1 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster deletionTimestamp: 20230301T000000.000", "delete $(oc get kt -n <namespace_name> -o name | grep strimzi-store-topic) && oc delete $(oc get kt -n <namespace_name> -o name | grep strimzi-topic-operator)", "annotate $(oc get kt -n <namespace_name> -o name | grep consumer-offsets) strimzi.io/managed=\"false\" && oc annotate $(oc get kt -n <namespace_name> -o name | grep transaction-state) strimzi.io/managed=\"false\"", "delete $(oc get kt -n <namespace_name> -o name | grep consumer-offsets) && oc delete $(oc get kt -n <namespace_name> -o name | grep transaction-state)", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {} template: topicOperatorContainer: env: - name: 
STRIMZI_USE_FINALIZERS value: \"false\"", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator spec: template: spec: containers: - name: STRIMZI_USE_FINALIZERS value: \"false\"", "get kt -o=json | jq '.items[].metadata.finalizers = null' | oc apply -f -", "get kt <topic_name> -o=json | jq '.metadata.finalizers = null' | oc apply -f -", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: \"*\" - resource: type: group name: my-group patternType: literal operations: - Read host: \"*\" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: \"*\"", "apply -f <user_config_file>", "get kafkausers -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:07:37.238065Z\" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: \"True\" type: NotReady", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: simple", "get kafkausers my-user-2 -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:33:40.166846Z\" status: \"True\" type: Ready", "run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic", "run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic --from-beginning", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 # authorization: 8 type: simple superUsers: - super-user-name 9 #", "apply -f <kafka_configuration_file>", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - 
resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read", "apply -f USER-CONFIG-FILE", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "get secret <user_name> -o jsonpath='{.data.user\\.crt}' | base64 -d > user.crt", "get secret <user_name> -o jsonpath='{.data.user\\.key}' | base64 -d > user.key", "props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \" <hostname>:<port> \");", "props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, \" <ca.crt_file_content> \");", "props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \" <user.crt_file_content> \"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \" <user.key_file_content> \");", "props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"-----BEGIN CERTIFICATE----- \\n <user_certificate_content_line_1> \\n <user_certificate_content_line_n> \\n-----END CERTIFICATE---\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"----BEGIN PRIVATE KEY-----\\n <user_key_content_line_1> \\n <user_key_content_line_n> \\n-----END PRIVATE KEY-----\");", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP", "status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 #", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external4\")].bootstrapServers}{\"\\n\"}' ip-10-0-224-199.us-west-2.compute.internal:32650", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP 
my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com", "status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 #", "status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com #", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external3\")].bootstrapServers}{\"\\n\"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough", "status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com #", "openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts", "Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external1\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #", "apiVersion: v1 kind: Service metadata: annotations: 
strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service", "get service -l strimzi.io/discovery=true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 - CN=client_4,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=US - CN=client_5,OU=my_ou,O=my_org,C=GB - CN=client_6,O=my_org #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls-external #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2", "echo \"Z2VuZXJhdGVkcGFzc3dvcmQ=\" | base64 --decode", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # authorization: 1 type: simple superUsers: 2 - CN=client_1 - user_2 - CN=client_3 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls 3 # zookeeper: #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: 
my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read", "apply -f <user_config_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # zookeeper: #", "apply -f your-file", "create secret generic my-secret --from-file= my-listener-key.key --from-file= my-listener-certificate.crt", "listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "apply -f kafka.yaml", "//Kafka brokers *. <cluster-name> -kafka-brokers *. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc", "// Kafka brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc <cluster-name> -kafka-1. <cluster-name> -kafka-brokers <cluster-name> -kafka-1. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc", "// Kafka brokers <cluster-name> -kafka- <listener-name> -0 <cluster-name> -kafka- <listener-name> -0. <namespace> .svc <cluster-name> -kafka- <listener-name> -1 <cluster-name> -kafka- <listener-name> -1. <namespace> .svc // Bootstrap service <cluster-name> -kafka- <listener-name> -bootstrap <cluster-name> -kafka- <listener-name> -bootstrap. 
<namespace> .svc", "authentication: type: oauth # enableOauthBearer: true", "authentication: type: oauth # enablePlain: true tokenEndpointUri: https:// OAUTH-SERVER-ADDRESS /auth/realms/external/protocol/openid-connect/token", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth #", "listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth validIssuerUri: <https://<auth_server_address>/auth/realms/tls> jwksEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/certs> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret validIssuerUri: <https://<auth_server_-_address>/auth/realms/tls> introspectionEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/token/introspect> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt", "edit kafka my-cluster", "# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/auth/realms/external 2 jwksEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/certs 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10", "- name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/auth/realms/external introspectionEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token/introspect 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5", "authentication: type: oauth # checkIssuer: false 1 checkAudience: true 2 fallbackUserNameClaim: client_id 3 fallbackUserNamePrefix: client-account- 4 validTokenType: bearer 5 userInfoEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/userinfo 6 enableOauthBearer: false 7 enablePlain: true 8 tokenEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token 9 customClaimCheck: \"@.custom == 'custom-value'\" 10 clientAudience: audience 11 clientScope: scope 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 httpRetryPauseMs: 300 16 groupsClaim: \"USD.groups\" 17 groupsClaimDelimiter: \",\" 18 includeAcceptHeader: false 19", "logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w", "<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00007</version> </dependency>", "security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 
3 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"$STOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"$STOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.access.token=\"<access_token>\" \\ 1 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"$STOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"$STOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }", "apiVersion: v1 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth 1 tokenEndpointUri: https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert certificate: tls.crt",
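As a quick sanity check before wiring these credentials into a client.properties file, you can request a token straight from the authorization server. This is a hedged sketch rather than part of the original document: the realm, client ID, and client secret are placeholders, and jq is assumed to be installed.
"curl -s -X POST 'https://<auth_server_address>/auth/realms/<realm>/protocol/openid-connect/token' -d 'grant_type=client_credentials' -d 'client_id=<client_id>' -d 'client_secret=<client_secret>' | jq -r '.access_token' # prints a short-lived access token if the credentials are valid", "spec: # authentication: # disableTlsHostnameVerification: true 1 checkAccessTokenType: false 2 accessTokenIsJwt: false 3 scope: any 4 audience: kafka 5 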
connectTimeoutSeconds: 60 6 readTimeoutSeconds: 60 7 httpRetries: 2 8 httpRetryPauseMs: 300 9 includeAcceptHeader: false 10", "apply -f your-file", "logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w", "edit kafka my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=fred - sam - CN=edward tlsTrustedCertificates: 7 - secretName: oauth-server-cert certificate: ca.crt grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #", "logs -f USD{POD_NAME} -c kafka get pod -w", "Topic:my-topic Topic:orders-* Group:orders-* Cluster:*", "kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_*", "bin/kafka-topics.sh --create --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties", "Topic:my-topic Group:my-group-*", "bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties", "Topic:my-topic Cluster:kafka-cluster", "bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1", "bin/kafka-consumer-groups.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-consumer-groups.sh --describe --group my-group-1 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --alter --topic my-topic --partitions 2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --delete --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties", "bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list \"0,1\" --generate --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json", "bin/kafka-reassign-partitions.sh --reassignment-json-file 
/tmp/partition-reassignment.json --execute --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties", "bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties", "NS=sso get ingress keycloak -n USDNS", "get -n USDNS pod keycloak-0 -o yaml | less", "SECRET_NAME=credential-keycloak get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D", "Dev Team A can write to topics that start with x_ on any cluster Dev Team B can read from topics that start with x_ on any cluster Dev Team B can update consumer group offsets that start with x_ on any cluster ClusterManager of my-cluster Group has full access to cluster config on my-cluster ClusterManager of my-cluster Group has full access to consumer groups on my-cluster ClusterManager of my-cluster Group has full access to topics on my-cluster", "SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem", "split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt", "create secret generic oauth-server-cert --from-file=/tmp/sso-ca.crt -n USDNS", "SSO_HOST= SSO-HOSTNAME", "cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\\USD{SSO_HOST}'\"#USDSSO_HOST#\" | oc create -n USDNS -f -", "NS=sso run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 kafka-cli -n USDNS -- /bin/sh", "attach -ti kafka-cli -n USDNS", "SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem", "split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt", "keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt", "KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem", "split -p \"-----BEGIN CERTIFICATE-----\" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt", "keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt", "SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-a-client\" oauth.client.secret=\"team-a-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" 
oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF", "cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-b-client\" oauth.client.secret=\"team-b-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF", "USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST \"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" -H 'Content-Type: application/x-www-form-urlencoded' -d \"grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access\" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F \"refresh_token\\\":\\\"\" '{printf USD2}' | awk -F \"\\\"\" '{printf USD1}')", "cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.refresh.token=\"USDREFRESH_TOKEN\" oauth.client.id=\"kafka-cli\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config=/tmp/team-a-client.properties First message", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-a-client.properties First message Second message", "logs my-cluster-kafka-0 -f -n USDNS", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1", "bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list", "bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list", "bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --entity-type brokers --describe --entity-default", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-b-client.properties Message 1", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages 
--producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 1", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1", "bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1", "bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 4 Message 5", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/alice.properties", "bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --entity-type brokers --describe --entity-default", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false", "Not Before Not After | | |<--------------- validityDays --------------->| <--- renewalDays --->|", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true", "annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew=\"true\"", "annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew=\"true\"", "get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "delete secret my-cluster-cluster-ca-cert -n my-project", "delete secret my-cluster-clients-ca-cert -n my-project", "get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d 
| openssl x509 -noout -dates", "get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert", "kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt", "create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt", "create secret generic <cluster_name>-cluster-ca-cert --from-file=ca.crt=ca.crt --from-file=ca.p12=ca.p12 --from-literal=ca.password= P12-PASSWORD", "create secret generic <ca_key_secret> --from-file=ca.key=ca.key", "label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"", "label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"", "annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation=\"<ca_certificate_generation>\"", "annotate secret <ca_key_secret> strimzi.io/ca-key-generation=\"<ca_key_generation>\"", "kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # clusterCa: generateCertificateAuthority: false", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "cat <path_to_new_certificate> | base64", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"", "annotate Kafka my-cluster strimzi.io/pause-reconciliation=\"true\"", "describe Kafka <name_of_custom_resource>", "edit Kafka <name_of_custom_resource>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "cat <path_to_new_certificate> | base64", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 
2 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "edit secret <ca_key_name>", "apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque", "cat <path_to_new_key> | base64", "apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 1 metadata: annotations: strimzi.io/ca-key-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque", "annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"false\"", "annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation-", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F metadata: annotations: strimzi.io/ca-cert-generation: \"1\" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 # config: # default.replication.factor: 3 min.insync.replicas: 2 #", "annotate Kafka my-kafka-cluster strimzi.io/skip-broker-scaledown-check=\"true\"", "annotate Kafka my-kafka-cluster strimzi.io/skip-broker-scaledown-check-", "RackAwareGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal #", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal #", "KafkaRebalance.spec.goals", "describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace>", "get kafkarebalance -o json | jq <jq_query> .", 
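To make the <jq_query> placeholder concrete, here is one possible query; it is a sketch, not from the original text, and the key names under .status.optimizationResult are assumptions inferred from the describe output shown alongside, so adjust them if your Strimzi version differs.
"get kafkarebalance my-rebalance -o json | jq '.status.optimizationResult | {numReplicaMovements, numLeaderMovements, dataToMoveMB}' # summarizes the headline numbers of the current proposal",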
"Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: mode: # any mode #", "describe configmaps <my_rebalance_configmap_name> -n <namespace>", "get configmaps <my_rebalance_configmap_name> -o json | jq '.[\"data\"][\"brokerLoad.json\"]|fromjson|.'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s overrides: 2 - brokers: [0] inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s # config: 3 # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > 4 com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal # cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true 5 webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" # resources: 6 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 7 type: inline loggers: rootLogger.level: INFO template: 8 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 10 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml", "apply -f <kafka_configuration_file>", "get deployments -n <my_cluster_operator_namespace>", "NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: full", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: add-brokers brokers: [3, 4] 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-brokers brokers: [3, 4] 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal", 
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apply -f <kafka_rebalance_configuration_file>", "get kafkarebalance -o wide -w -n <namespace>", "describe kafkarebalance <kafka_rebalance_resource_name>", "Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c", "com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Add at least 3 brokers with the same cpu capacity (100.00) as broker-0.", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"refresh\"", "get kafkarebalance -o wide -w -n <namespace>", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"approve\"", "get kafkarebalance -o wide -w -n <namespace>", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=\"stop\"", "describe kafkarebalance rebalance-cr-name", "describe kafkarebalance rebalance-cr-name", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=\"refresh\"", "describe kafkarebalance rebalance-cr-name", "run helper-pod -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bash", "{ \"version\": 1, 1 \"partitions\": [ 2 { \"topic\": \"example-topic-1\", 3 \"partition\": 0, 4 \"replicas\": [1, 2, 3] 5 }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] } ] }", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "{ \"version\": 1, \"partitions\": [ { \"topic\": \"example-topic-1\", \"partition\": 0, \"replicas\": [1, 2, 3] \"log_dirs\": [\"/var/lib/kafka/data-0/kafka-log1\", \"any\", \"/var/lib/kafka/data-1/kafka-log2\"] }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] \"log_dirs\": [\"any\", \"/var/lib/kafka/data-2/kafka-log3\", \"/var/lib/kafka/data-3/kafka-log4\"] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] \"log_dirs\": [\"/var/lib/kafka/data-4/kafka-log5\", \"any\", \"/var/lib/kafka/data-5/kafka-log6\"] } ] }", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: # - name: tls port: 9093 type: internal tls: true 1 authentication: type: tls 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 3 config: retention.ms: 7200000 segment.bytes: 1073741824 #", 
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: # access to the topic - resource: type: topic name: my-topic operations: - Create - Describe - Read - AlterConfigs host: \"*\" # access to the cluster - resource: type: cluster operations: - Alter - AlterConfigs host: \"*\" # #", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "run --restart=Never --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 <interactive_pod_name> -- /bin/sh -c \"sleep 3600\"", "cp ca.p12 <interactive_pod_name> :/tmp", "get secret <kafka_user> -o jsonpath='{.data.user\\.p12}' | base64 -d > user.p12", "get secret <kafka_user> -o jsonpath='{.data.user\\.password}' | base64 -d > user.password", "cp user.p12 <interactive_pod_name> :/tmp", "bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6", "cp config.properties <interactive_pod_name> :/tmp/config.properties", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "cp topics.json <interactive_pod_name> :/tmp/topics.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/config.properties --topics-to-move-json-file /tmp/topics.json --broker-list 0,1,2,3,4 --generate", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties 
--reassignment-json-file /tmp/reassignment.json --verify", "exec my-cluster-kafka-0 -c kafka -it -- /bin/bash -c \"ls -l /var/lib/kafka/kafka-log_<n>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-delete$'\"", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[3,4,2,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[0,2,3,1],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[1,3,0,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}", "jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment.json.tmp && mv reassignment.json.tmp reassignment.json", "{\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "bin/kafka-topics.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --describe", "my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 3 replicas: 3", "metrics ├── grafana-dashboards 1 │ ├── strimzi-cruise-control.json │ ├── strimzi-kafka-bridge.json │ ├── strimzi-kafka-connect.json │ ├── strimzi-kafka-exporter.json │ ├── strimzi-kafka-mirror-maker-2.json │ ├── strimzi-kafka.json │ ├── strimzi-operators.json │ └── strimzi-zookeeper.json ├── grafana-install │ └── grafana.yaml 2 ├── prometheus-additional-properties │ └── prometheus-additional.yaml 3 ├── prometheus-alertmanager-config │ └── alert-manager-config.yaml 4 ├── prometheus-install │ ├── alert-manager.yaml 5 │ ├── prometheus-rules.yaml 6 │ ├── prometheus.yaml 7 │ └── strimzi-pod-monitor.yaml 8 ├── kafka-bridge-metrics.yaml 9 ├── kafka-connect-metrics.yaml 10 ├── kafka-cruise-control-metrics.yaml 11 ├── kafka-metrics.yaml 12 └── kafka-mirror-maker-2-metrics.yaml 13", "apply -f kafka-metrics.yaml", "edit kafka <kafka_configuration_file>",
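Once the exporter is configured, a quick way to confirm that metrics are actually being served is to hit the metrics port on a broker pod. This is a hedged sketch, not from the original document: port 9404 is the usual Strimzi Prometheus port, but verify it for your deployment before relying on it.
"port-forward my-cluster-kafka-0 9404:9404 & sleep 2 && curl -s http://localhost:9404/metrics | head # assumes the JMX Prometheus exporter listens on 9404; kill the port-forward afterwards", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: 1 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: 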
kafka-metrics key: kafka-metrics-config.yml --- kind: ConfigMap 2 apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yml: | # metrics configuration", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafkaExporter: image: my-registry.io/my-org/my-exporter-cluster:latest 1 groupRegex: \".*\" 2 topicRegex: \".*\" 3 groupExcludeRegex: \"^excluded-.*\" 4 topicExcludeRegex: \"^excluded-.*\" 5 resources: 6 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 7 enableSaramaLogging: true 8 template: 9 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 10 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 11 initialDelaySeconds: 15 timeoutSeconds: 5", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # bootstrapServers: my-cluster-kafka:9092 http: # enableMetrics: true #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth enableMetrics: true configuration: # authorization: type: keycloak enableMetrics: true #", "get pods -n openshift-user-workload-monitoring", "NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - <project-name> 1 podMetricsEndpoints: - path: /metrics port: http", "apply -f strimzi-pod-monitor.yaml -n MY-PROJECT", "apply -f prometheus-rules.yaml -n MY-PROJECT", "create sa grafana-service-account -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-service-account namespace: my-project roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io", "apply -f grafana-cluster-monitoring-binding.yaml -n my-project", "apiVersion: v1 kind: Secret metadata: name: secret-sa annotations: kubernetes.io/service-account.name: \"grafana-service-account\" 1 type: kubernetes.io/service-account-token 2", "create -f <secret_configuration>.yaml", "describe sa/grafana-service-account | grep Tokens: describe secret grafana-service-account-token-mmlp9 | grep token:", "apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: \"Authorization\" secureJsonData: httpHeaderValue1: \"Bearer USD{ GRAFANA-ACCESS-TOKEN }\" 1 editable: true", "create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT", "apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-service-account containers: - name: grafana 
image: grafana/grafana:10.4.2 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP", "apply -f <grafana-application> -n <my-project>", "create route edge <my-grafana-route> --service=grafana --namespace= KAFKA-NAMESPACE", "get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apply -f <resource_configuration_file>", "<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> 
<artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>", "OpenTelemetry ot = GlobalOpenTelemetry.get();", "GlobalTracer.register(tracer);", "// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);", "consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }", "KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();", "props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());", "io.opentelemetry:opentelemetry-exporter-zipkin", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #", "//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? 
> producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [\"\"] apiVersions: [\"v1\"] operations: [\"CREATE\"] resources: [\"pods/eviction\"] scope: \"Namespaced\" clientConfig: service: namespace: \"strimzi-drain-cleaner\" name: \"strimzi-drain-cleaner\" path: /drainer port: 443 caBundle: Cg== #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DENY_EVICTION value: \"true\" - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"false\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # zookeeper: template: podDisruptionBudget: maxUnavailable: 0 #", "apply -f <kafka_configuration_file>", "apply -f ./install/drain-cleaner/openshift", "get nodes drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force", "INFO ... Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... 
Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart", "INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-drain-cleaner labels: app: strimzi-drain-cleaner namespace: strimzi-drain-cleaner spec: # spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_ENABLED value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_CERTIFICATE_WATCH_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name #", "./report.sh --namespace=<cluster_namespace> --cluster=<cluster_name> --out-dir=<local_output_directory>", "./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports", "env: - name: STRIMZI_FEATURE_GATES value: -ControlPlaneListener", "env: - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # template: podDisruptionBudget: maxUnavailable: 0", "annotate pod my-cluster-pool-a-1 strimzi.io/manual-rolling-update=\"true\"", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "replace -f install/cluster-operator", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "registry.redhat.io/amq-streams/strimzi-kafka-37-rhel9:2.7.0", "get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 version: 3.6.0 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 1 version: 3.7.0 2 #", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.7-IV2 version: 3.7.0 #", "edit kafka <kafka_configuration_file>", "kind: Kafka spec: # kafka: version: 3.6.0 config: log.message.format.version: \"3.6\" inter.broker.protocol.version: \"3.6\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 1 config: log.message.format.version: \"3.6\" 2 inter.broker.protocol.version: \"3.6\" 3 #", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 config: log.message.format.version: \"3.6\" inter.broker.protocol.version: \"3.7\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 config: log.message.format.version: \"3.7\" 
inter.broker.protocol.version: \"3.7\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: # kafkaVersion: 3.7.0 operatorLastSuccessfulVersion: 2.7 kafkaMetadataVersion: 3.7", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "replace -f install/cluster-operator", "get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 1 version: 3.7.0 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 1 version: 3.6.0 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 config: log.message.format.version: \"3.6\" #", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafka: version: 3.7.0 1 config: inter.broker.protocol.version: \"3.6\" 2 log.message.format.version: \"3.6\" #", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafka: version: 3.6.0 1 config: inter.broker.protocol.version: \"3.6\" 2 log.message.format.version: \"3.6\" #", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete subscription amq-streams -n openshift-operators", "delete csv amqstreams. 
<version> -n openshift-operators", "get crd -l app=strimzi -o name | xargs oc delete", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete -f install/cluster-operator", "delete <resource_type> <resource_name> -n <namespace>", "delete secret my-cluster-clients-ca-cert -n my-project", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator", "LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml", "apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: \"2022-05-13T00:22:34.168086Z\" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: \"2022-05-13T00:22:34Z\" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: \"432961\" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "maintenanceTimeWindows: - \"* * 0-1 ? * SUN,MON,TUE,WED,THU *\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # maintenanceTimeWindows: - \"* * 8-10 * * ?\" - \"* * 14-15 * * ?\"", "apply -f <kafka_configuration_file>", "annotate strimzipodset <cluster_name>-kafka strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-zookeeper strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-connect strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-mirrormaker2 strimzi.io/manual-rolling-update=\"true\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "annotate pod <cluster_name>-kafka-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-connect-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-mirrormaker2-<index_number> strimzi.io/manual-rolling-update=\"true\"", "apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain", "apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain", "apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain", "get pv", "NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... 
myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2", "create namespace myproject", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c", "apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem", "claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea", "create -f install/cluster-operator -n my-project", "apply -f kafka.yaml", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #", "get KafkaTopic", "2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 
17 more", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");", "com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2024.Q2 rht.comp=AMQ_Streams rht.comp_ver=2.7 rht.subcomp=entity-operator rht.subcomp_t=infrastructure", "com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2024.Q2 rht.comp=AMQ_Streams rht.comp_ver=2.7 rht.subcomp=kafka-bridge rht.subcomp_t=application", "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/deploying_and_managing_streams_for_apache_kafka_on_openshift/index
Chapter 4. Installing a three-node cluster on Nutanix
Chapter 4. Installing a three-node cluster on Nutanix In OpenShift Container Platform version 4.14, you can install a three-node cluster on Nutanix. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. 4.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... 4.2. Next steps Installing a cluster on Nutanix
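To verify the three-node configuration after the cluster is deployed, you can list the nodes; each node should carry both control plane and compute roles because the control plane machines are schedulable. The following output is illustrative only, with placeholder node names and versions:

$ oc get nodes
NAME       STATUS   ROLES                         AGE   VERSION
master-0   Ready    control-plane,master,worker   39m   v1.27.x
master-1   Ready    control-plane,master,worker   39m   v1.27.x
master-2   Ready    control-plane,master,worker   39m   v1.27.x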
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_nutanix/installing-nutanix-three-node
22.3. Understanding NTP
22.3. Understanding NTP The version of NTP used by Red Hat Enterprise Linux is described in RFC 1305 Network Time Protocol (Version 3) Specification, Implementation and Analysis and RFC 5905 Network Time Protocol Version 4: Protocol and Algorithms Specification. This implementation of NTP enables sub-second accuracy to be achieved. Over the Internet, accuracy to tens of milliseconds is normal. On a Local Area Network (LAN), 1 ms accuracy is possible under ideal conditions. This is because clock drift is now accounted for and corrected, which was not done in earlier, simpler, time protocol systems. A resolution of 233 picoseconds is provided by using 64-bit time stamps. The first 32 bits of the time stamp are used for seconds; the last 32 bits are used for fractions of a second. NTP represents the time as a count of the number of seconds since 00:00 (midnight) 1 January, 1900 GMT. As 32 bits are used to count the seconds, the time will "roll over" in 2036. However, NTP works on the difference between time stamps, so this does not present the same level of problem as other implementations of time protocols have done. If a hardware clock that is within 68 years of the correct time is available at boot time, then NTP will correctly interpret the current date. The NTP4 specification provides for an "Era Number" and an "Era Offset" which can be used to make software more robust when dealing with time lengths of more than 68 years. Note that this should not be confused with the Unix Year 2038 problem. The NTP protocol provides additional information to improve accuracy. Four time stamps are used to allow the calculation of round-trip time and server response time. In order for a system in its role as NTP client to synchronize with a reference time server, a packet is sent with an "originate time stamp". When the packet arrives, the time server adds a "receive time stamp". After processing the request for time and date information and just before returning the packet, it adds a "transmit time stamp". When the returning packet arrives at the NTP client, a "receive time stamp" is generated. The client can now calculate the total round-trip time and, by subtracting the processing time, derive the actual traveling time. By assuming the outgoing and return trips take equal time, the single-trip delay in receiving the NTP data is calculated. The full NTP algorithm is much more complex than presented here. When a packet containing time information is received, it is not immediately responded to, but is first subject to validation checks and then processed together with several other time samples to arrive at an estimate of the time. This is then compared to the system clock to determine the time offset, the difference between the system clock's time and what ntpd has determined the time should be. The system clock is adjusted slowly, at most at a rate of 0.5 ms per second, to reduce this offset by changing the frequency of the counter being used. It will take at least 2000 seconds to adjust the clock by 1 second using this method. This slow change is referred to as slewing and cannot go backwards. If the time offset of the clock is more than 128 ms (the default setting), ntpd can "step" the clock forwards or backwards. If the time offset at system start is greater than 1000 seconds, then the user, or an installation script, should make a manual adjustment. See Chapter 2, Date and Time Configuration.
With the -g option to the ntpd command (used by default), any offset at system start will be corrected, but during normal operation only offsets of up to 1000 seconds will be corrected. Some software may fail or produce an error if the time is changed backwards. For systems that are sensitive to step changes in the time, the threshold can be changed to 600s instead of 128ms using the -x option (unrelated to the -g option). Using the -x option to increase the stepping limit from 0.128s to 600s has a drawback because a different method of controlling the clock has to be used. It disables the kernel clock discipline and may have a negative impact on the clock accuracy. The -x option can be added to the /etc/sysconfig/ntpd configuration file.
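To make the four-time-stamp exchange concrete, the standard NTP calculation (as in RFC 5905) uses the originate time stamp t1, the server's receive time stamp t2, the server's transmit time stamp t3, and the client's receive time stamp t4 to derive: round-trip delay = (t4 - t1) - (t3 - t2), and clock offset = ((t2 - t1) + (t3 - t4)) / 2. As a worked example with illustrative values, if t1 = 10.000 s, t2 = 10.050 s, t3 = 10.051 s, and t4 = 10.011 s, then the delay is 0.011 - 0.001 = 0.010 s (a 10 ms round trip) and the offset is (0.050 + 0.040) / 2 = 0.045 s, meaning the client clock is 45 ms behind the server and ntpd would slew or step it forwards accordingly.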
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-understanding_ntp
5.65. fence-agents
5.65. fence-agents 5.65.1. RHBA-2012:1439 - fence-agents bug fix update Updated fence-agents packages that fix one bug are now available for Red Hat Enterprise Linux 6. The fence-agents packages provide the Red Hat fence agents to handle remote power management for cluster devices. The fence-agents allow failed or unreachable nodes to be forcibly restarted and removed from the cluster. Bug Fix BZ# 872620 The speed of fencing is critical because otherwise, broken nodes have more time to corrupt data. Prior to this update, the operation of the fence_vmware_soap fencing agent was slow and could corrupt data when used on the VMWare vSphere platform with hundreds of virtual machines. This update fixes a problem with virtual machines that do not have a valid UUID, which can be created during failed P2V (Physical-to-Virtual) processes. Now, the fencing process is also much faster and it does not terminate if a virtual machine without a UUID is encountered. All users of fence-agents are advised to upgrade to these updated packages, which fix this bug. 5.65.2. RHBA-2012:0943 - fence-agents bug fix and enhancement update Updated fence-agents packages that fix various bugs and add an enhancement are now available for Red Hat Enterprise Linux 6. The fence-agents package contains a collection of scripts to handle remote power management for cluster devices. They allow failed or unreachable cluster nodes to be forcibly restarted and removed from the cluster. Bug Fixes BZ# 769681 The fence_rhevm fencing agent uses the Red Hat Enterprise Virtualization API to check the power status ("on" or "off") of a virtual machine. In addition to the "up" and "down" states, the API includes a number of other states. Previously, the "on" power status was returned only if the machine was in the "up" state. The "off" status was returned for all other states even if the machine was running. This allowed for successful fencing before the machine was really powered off. With this update, the fence_rhevm agent detects the power status of a cluster node more conservatively, and the "off" status is returned only if the machine is actually powered off, that is, in the "down" state. BZ# 772597 Previously, the fence_vmware_soap fence agent was not able to work with more than one hundred machines in a cluster. Consequently, fencing a cluster node running in a virtual machine on VMWare with the fence_vmware_soap fence agent failed with the "KeyError: 'config.uuid'" error message. With this update, the underlying code has been fixed to support fencing on such clusters. BZ# 740484 Previously, the fence_ipmilan agent failed to handle passwd_script argument values that contained space characters. Consequently, it was impossible to use a password script that required additional parameters. This update ensures that fence_ipmilan accepts and properly parses values for the passwd_script argument with spaces. BZ# 771211 Previously, the fence_vmware_soap fence agent did not expose the proper virtual machine path for fencing. With this update, fence_vmware_soap has been fixed to support this virtual machine identification. BZ# 714841 Previously, certain fence agents did not generate correct metadata output. As a result, it was not possible to use the metadata for automatic generation of manual pages and user interfaces. With this update, all fence agents generate their metadata as expected. BZ# 771936 Possible buffer overflow and null dereference defects were found by automatic tools. With this update, these problems have been fixed.
BZ# 785091 Fence agents that use an identity file for SSH terminated unexpectedly when a password was expected but not provided. This bug has been fixed and proper error messages are returned in the described scenario. BZ# 787706 The fence_ipmilan fence agent did not respect the power_wait option and did not wait after sending the power-off signal to a device. Consequently, the device could terminate its shutdown sequence. This bug has been fixed and fence_ipmilan now waits before shutting down a machine as expected. BZ# 741339 The fence_scsi agent creates the fence_scsi.dev file that contains a list of devices that the node registered with during an unfence operation. This file was unlinked for every unfence action. Consequently, if multiple fence device entries were used in the cluster.conf file, fence_scsi.dev only contained the devices that the node registered with during the most recent unfence action. Now, instead of the unlink call, if the device currently being registered does not exist in fence_scsi.dev, it is added to the file. BZ# 804169 If the "delay" option was set to more than 5 seconds while a fence device was connected via the telnet_ssl utility, the connection timed out and the fence device failed. Now, the "delay" option is applied before the connection is opened, thus fixing this bug. BZ# 806883 Previously, XML metadata returned by a fence agent incorrectly listed all attributes as "unique". This update fixes this problem and the attributes are now marked as unique only when this information is valid. BZ# 806912 This update fixes a typographical error in an error message in the fence_ipmilan agent. BZ# 806897 Prior to this update, the fence agent for IPMI (Intelligent Platform Management Interface) could return an invalid return code when the "-M cycle" option was used. This invalid return code could cause invalid interpretation of a fence action, eventually causing the cluster to become unresponsive. This bug has been fixed and only predefined return codes are now returned in the described scenario. BZ# 804805 Previously, the fence_brocade fence agent did not distinguish the "action" option from the standard "option" option. Consequently, the "action" option was ignored and the node was always fenced. This bug has been fixed and both options are now properly recognized and acted upon. Enhancement BZ# 742003 This update adds the ability to access the Fujitsu RSB fencing device using secure shell. Users of fence-agents are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/fence-agents
Chapter 4. Managing rules
Chapter 4. Managing rules The MTA plugin comes with a core set of System rules for analyzing projects and identifying migration and modernization issues. You can create and import custom rulesets. 4.1. Viewing rules You can view system and custom rules, if any, for the MTA plugin. Prerequisites To view system rules, the MTA server must be running. Procedure Click the Rulesets tab. Expand System to view system rulesets or Custom to view custom rulesets. Expand a ruleset. Double-click a rule to open it in a viewer. Click the Source tab to view the XML source of the rule. 4.2. Creating a custom ruleset You can create a custom ruleset in the MTA perspective. See the Rule Development Guide to learn more about creating custom XML rules. Procedure Click the Rulesets tab. Click the Create Ruleset icon. Select a project and a directory for the ruleset. Enter the file name. Note The file must have the extension .windup.xml . Enter a ruleset ID, for example, my-ruleset-id . Optional: Select Generate quickstart template to add basic rule templates to the file. Click Finish . The ruleset file opens in an editor and you can add and edit rules in the file. Click the Source tab to edit the XML source of the ruleset file. You can select the new ruleset when you create a run configuration. 4.3. Importing a custom ruleset You can import a custom ruleset into the MTA plugin to analyze your projects. Prerequisites Custom ruleset file with a .windup.xml extension. See the Rule Development Guide for information about creating rulesets. Procedure Click the Rulesets tab. Click the Import Ruleset icon. Browse to and select the XML rule file to import. The custom ruleset is displayed when you expand Custom on the Rulesets tab. 4.4. Submitting a custom ruleset You can submit your custom ruleset for inclusion in the official MTA rule repository. This allows your custom rules to be reviewed and included in subsequent releases of MTA. Procedure Click the Rulesets tab. Click the Arrow icon and select Submit Ruleset . Complete the following fields: Summary : Describe the purpose of the rule. This becomes the title of the submission. Code Sample : Enter an example of the source code that the rule should run against. Description : Enter a brief description of the rule. Click Choose Files and select the ruleset file. Click Submit .
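To give a sense of what goes inside a custom ruleset file, the following is a minimal sketch of the general windup rule shape. The ruleset ID, rule ID, referenced class name, and hint text are placeholders, and the authoritative schema and element reference are in the Rule Development Guide, so treat this as an illustration only:

<?xml version="1.0"?>
<ruleset id="my-ruleset-id" xmlns="http://windup.jboss.org/schema/jboss-ruleset">
    <metadata>
        <description>Placeholder description for an example custom ruleset.</description>
    </metadata>
    <rules>
        <!-- Fires when the analyzed application references the named class -->
        <rule id="my-ruleset-id-00001">
            <when>
                <javaclass references="com.example.legacy.LegacyClient"/>
            </when>
            <perform>
                <hint title="Legacy client API found" effort="1">
                    <message>Replace com.example.legacy.LegacyClient with a supported API.</message>
                </hint>
            </perform>
        </rule>
    </rules>
</ruleset>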
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/eclipse_plugin_guide/managing-rules_eclipse-code-ready-studio-guide
Chapter 2. Using machine config objects to configure nodes
Chapter 2. Using machine config objects to configure nodes You can use the tasks in this section to create MachineConfig objects that modify files, systemd unit files, and other operating system features running on OpenShift Container Platform nodes. For more ideas on working with machine configs, see content related to updating SSH authorized keys, verifying image signatures, enabling SCTP, and configuring iSCSI initiatornames for OpenShift Container Platform. OpenShift Container Platform supports Ignition specification version 3.2. All new machine configs you create going forward should be based on Ignition specification version 3.2. If you are upgrading your OpenShift Container Platform cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.2. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection. Tip Use the following "Configuring chrony time service" procedure as a model for how to go about adding other configuration files to OpenShift Container Platform nodes. 2.1. Configuring chrony time service You can set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml. 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123. If an external NTP time server is configured, you must open UDP port 123. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org. Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml, containing the configuration to be delivered to the nodes: $ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
If the cluster is already running, apply the file: $ oc apply -f ./99-worker-chrony.yaml Additional resources Creating machine configs with Butane 2.2. Disabling the chrony time service You can disable the chrony time service (chronyd) for nodes with a specific role by using a MachineConfig custom resource (CR). Prerequisites Install the OpenShift CLI (oc). Log in as a user with cluster-admin privileges. Procedure Create the MachineConfig CR that disables chronyd for the specified node role. Save the following YAML in the disable-chronyd.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd $OPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: "chronyd.service" - name: "kubelet-dependencies.target" contents: | [Unit] Description=Dependencies necessary to run kubelet Documentation=https://github.com/openshift/machine-config-operator/ Requires=basic.target network-online.target Wants=NetworkManager-wait-online.service crio-wipe.service Wants=rpc-statd.service 1 Node role where you want to disable chronyd, for example, master. Create the MachineConfig CR by running the following command: $ oc create -f disable-chronyd.yaml 2.3. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and a clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging.
Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: $ oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml): apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0.
Create the new machine config: $ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: $ oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): $ oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 2.4. Enabling multipathing with kernel arguments on RHCOS Important Enabling multipathing during installation is supported and recommended for nodes provisioned in OpenShift Container Platform. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. For more information about enabling multipathing during installation time, see "Enabling multipathing post installation" in the Installing on bare metal documentation. Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Postinstallation support is available by activating multipathing via the machine config. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE.
Important When an OpenShift Container Platform cluster is installed or configured as a postinstallation activity on a single VIOS host with "vSCSI" storage on IBM Power(R) with multipath configured, the CoreOS nodes with multipath enabled fail to boot. This behavior is expected, as only one path is available to the node. Prerequisites You have a running OpenShift Container Platform cluster. You are logged in to the cluster as a user with administrative privileges. You have confirmed that the disk is enabled for multipathing. Multipathing is only supported on hosts that are connected to a SAN via an HBA adapter. Procedure To enable multipathing postinstallation on control plane nodes: Create a machine config file, such as 99-master-kargs-mpath.yaml , that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing postinstallation on worker nodes: Create a machine config file, such as 99-worker-kargs-mpath.yaml , that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' Create the new machine config by using either the master or worker YAML file you previously created: $ oc create -f ./99-worker-kargs-mpath.yaml Check the machine configs to see that the new one was added: $ oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3 You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. Additional resources See Enabling multipathing with kernel arguments on RHCOS for more information about enabling multipathing during installation time. 2.5. Adding a real-time kernel to nodes Some OpenShift Container Platform workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics. If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.18, you can make this switch using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime , there are a few other considerations before making the change: Currently, the real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use. The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8. Real-time support in OpenShift Container Platform is limited to specific subscriptions. The following procedure is also supported for use with Google Cloud Platform. Prerequisites Have a running OpenShift Container Platform cluster (version 4.4 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml ) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes: USD cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-realtime spec: kernelType: realtime EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 99-worker-realtime.yaml Check the real-time kernel: After each affected node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.31.3 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.31.3 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.31.3 USD oc debug node/ip-10-0-143-147.us-east-2.compute.internal Example output Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux The kernel name contains rt , and the text "PREEMPT RT" indicates that this is a real-time kernel.
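As a quicker spot check across all configured nodes, the wide output of the oc get nodes command includes a KERNEL-VERSION column, so you can confirm the rt kernel without opening a debug session on each node. The output below is illustrative and abbreviated:

$ oc get nodes -o wide
NAME                                         ...   KERNEL-VERSION
ip-10-0-143-147.us-east-2.compute.internal   ...   4.18.0-147.3.1.rt24.96.el8_1.x86_64

Any node still running the regular kernel shows a version string without the rt component.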
To go back to the regular kernel, delete the MachineConfig object: USD oc delete -f 99-worker-realtime.yaml 2.6. Configuring journald settings If you need to configure settings for the journald service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config. This procedure describes how to modify journald rate limiting settings in the /etc/systemd/journald.conf file and apply them to worker nodes. See the journald.conf man page for information on how to use that file. Prerequisites Have a running OpenShift Container Platform cluster. Log in to the cluster as a user with administrative privileges. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.18.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the worker nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config to the pool: USD oc apply -f 40-worker-custom-journald.yaml Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied: USD oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m To check that the change was applied, you can log in to a worker node: USD oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD USD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ... ... sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit Additional resources Creating machine configs with Butane 2.7. Adding extensions to RHCOS RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. Although adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions feature you can use to add a minimal set of features to RHCOS nodes. Currently, the following extensions are available: usbguard : The usbguard extension protects RHCOS systems from attacks by intrusive USB devices. For more information, see USBGuard for details. kerberos : The kerberos extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. 
For more information, see Using Kerberos for details, including how to set up a Kerberos client and mount a Kerberized NFS share. sandboxed-containers : The sandboxed-containers extension contains RPMs for Kata, QEMU, and its dependencies. For more information, see OpenShift Sandboxed Containers . ipsec : The ipsec extension contains RPMs for libreswan and NetworkManager-libreswan. wasm : The wasm extension enables Developer Preview functionality in OpenShift Container Platform for users who want to use WASM-supported workloads. sysstat : Adding the sysstat extension provides additional performance monitoring for OpenShift Container Platform nodes, including the system activity reporter ( sar ) command for collecting and reporting information. kernel-devel : The kernel-devel extension provides kernel headers and makefiles sufficient to build modules against the kernel package. The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes. Prerequisites Have a running OpenShift Container Platform cluster (version 4.6 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml ) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension. USD cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 80-extensions.yaml This sets all worker nodes to have rpm packages for usbguard installed. Check that the extensions were applied: USD oc get machineconfig 80-worker-extensions Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s Check that the new machine config is now applied and that the nodes are not in a degraded state. It may take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m Check the extensions. To check that the extension was applied, run: USD oc get node | grep worker Example output NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.31.3 USD oc debug node/ip-10-0-169-2.us-east-2.compute.internal Example output ... To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm 2.8. Loading custom firmware blobs in the machine config manifest Because the default location for firmware blobs in /usr/lib is read-only, you can locate a custom firmware blob by updating the search path. This enables you to load local firmware blobs in the machine config manifest when the blobs are not managed by RHCOS. Procedure Create a Butane config file, 98-worker-firmware-blob.bu , that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware . 
Note See "Creating machine configs with Butane" for information about Butane. Butane config file for custom firmware blob variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4 1 Sets the path on the node where the firmware package is copied to. 2 Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step. 3 Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions. 4 The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path. Run Butane to generate a MachineConfig object file that uses a copy of the firmware blob on your local workstation named 98-worker-firmware-blob.yaml . The firmware blob contains the configuration to be delivered to the nodes. The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located: USD butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name> Apply the configurations to the nodes in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f 98-worker-firmware-blob.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. Additional resources Creating machine configs with Butane 2.9. Changing the core user password for node access By default, Red Hat Enterprise Linux CoreOS (RHCOS) creates a user named core on the nodes in your cluster. You can use the core user to access the node through a cloud provider serial console or a bare metal baseboard controller manager (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the oc debug node command. However, by default, there is no password for this user, so you cannot log in without creating one. You can create a password for the core user by using a machine config. The Machine Config Operator (MCO) assigns the password and injects the password into the /etc/shadow file, allowing you to log in with the core user. The MCO does not examine the password hash. As such, the MCO cannot report if there is a problem with the password. Note The password works only through a cloud provider serial console or a BMC. It does not work with SSH. If you have a machine config that includes an /etc/shadow file or a systemd unit that sets a password, it takes precedence over the password hash. You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account. 
Procedure Using a tool that is supported by your operating system, create a hashed password. For example, create a hashed password using mkpasswd by running the following command: USD mkpasswd -m SHA-512 testpass Example output USD USD6USDCBZwA6s6AVFOtiZeUSDaUKDWpthhJEyR3nnhM02NM1sKCpHn9XN.NPrJNQ3HYewioaorpwL3mKGLxvW0AOb4pJxqoqP4nFX77y0p00.8. Create a machine config file that contains the core username and the hashed password: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: set-core-user-password spec: config: ignition: version: 3.2.0 passwd: users: - name: core 1 passwordHash: <password> 2 1 This must be core . 2 The hashed password to use with the core account. Create the machine config by running the following command: USD oc create -f <file-name>.yaml The nodes do not reboot and should become available in a few moments. You can use the oc get mcp command to watch for the machine config pools to be updated, as shown in the following example: Verification After the nodes return to the UPDATED=True state, start a debug session for a node by running the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell by running the following command: sh-4.4# chroot /host Check the contents of the /etc/shadow file: Example output ... core:USD6USD2sE/010goDuRSxxvUSDo18K52wor.wIwZp:19418:0:99999:7::: ... The hashed password is assigned to the core user.
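If the mkpasswd utility is not available on your workstation, other common tools can produce a compatible SHA-512 crypt hash. The following alternative is an illustration and is not part of the procedure above; it assumes OpenSSL 1.1.1 or later, which supports the -6 option for SHA-512 crypt:

$ openssl passwd -6 testpass
$6$<salt>$<hash>

Paste the resulting string into the passwordHash field of the machine config exactly as printed.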
[ "variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: \"chronyd.service\" - name: \"kubelet-dependencies.target\" contents: | [Unit] Description=Dependencies necessary to run kubelet Documentation=https://github.com/openshift/machine-config-operator/ Requires=basic.target network-online.target Wants=NetworkManager-wait-online.service crio-wipe.service Wants=rpc-statd.service", "oc create -f disable-chronyd.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 
52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "oc create -f ./99-worker-kargs-mpath.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF", "oc create -f 99-worker-realtime.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.31.3 ip-10-0-146-92.us-east-2.compute.internal Ready 
worker 101m v1.31.3 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.31.3", "oc debug node/ip-10-0-143-147.us-east-2.compute.internal", "Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux", "oc delete -f 99-worker-realtime.yaml", "variant: openshift version: 4.18.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s", "butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml", "oc apply -f 40-worker-custom-journald.yaml", "oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m", "oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit", "cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF", "oc create -f 80-extensions.yaml", "oc get machineconfig 80-worker-extensions", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m", "oc get node | grep worker", "NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.31.3", "oc debug node/ip-10-0-169-2.us-east-2.compute.internal", "To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm", "variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4", "butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>", "oc apply -f 98-worker-firmware-blob.yaml", "mkpasswd -m SHA-512 testpass", "USD6USDCBZwA6s6AVFOtiZeUSDaUKDWpthhJEyR3nnhM02NM1sKCpHn9XN.NPrJNQ3HYewioaorpwL3mKGLxvW0AOb4pJxqoqP4nFX77y0p00.8.", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: set-core-user-password spec: config: ignition: version: 3.2.0 passwd: users: - name: core 1 passwordHash: <password> 2", "oc create -f <file-name>.yaml", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT 
UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-d686a3ffc8fdec47280afec446fce8dd True False False 3 3 3 0 64m worker rendered-worker-4605605a5b1f9de1d061e9d350f251e5 False True False 3 0 0 0 64m", "oc debug node/<node_name>", "sh-4.4# chroot /host", "core:USD6USD2sE/010goDuRSxxvUSDo18K52wor.wIwZp:19418:0:99999:7:::" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_configuration/machine-configs-configure
Chapter 15. Configuring the loopback interface by using nmcli
Chapter 15. Configuring the loopback interface by using nmcli By default, NetworkManager does not manage the loopback ( lo ) interface. After creating a connection profile for the lo interface, you can configure this device by using NetworkManager. For example, you can: Assign additional IP addresses to the lo interface Define DNS addresses Change the Maximum Transmission Unit (MTU) size of the lo interface Procedure Create a new connection of type loopback : Configure custom connection settings, for example: To assign an additional IP address to the interface, enter: Note NetworkManager manages the lo interface by always assigning the IP addresses 127.0.0.1 and ::1 , which persist across reboots. You cannot override 127.0.0.1 and ::1 . However, you can assign additional IP addresses to the interface. To set a custom Maximum Transmission Unit (MTU), enter: To set the IP address of your DNS server, enter: If you set a DNS server in the loopback connection profile, this entry is always available in the /etc/resolv.conf file. The DNS server entry remains independent of whether or not the host roams between different networks. Activate the connection: Verification Display the settings of the lo interface: Verify the DNS address:
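To undo the example customizations later, you can remove individual values from the profile. This is a minimal sketch that assumes the example-loopback profile name used above; the - prefix removes a single value from a multi-value property such as ipv4.addresses , and setting ipv4.dns to an empty string clears the DNS entry:

$ nmcli connection modify example-loopback -ipv4.addresses 192.0.2.1/24
$ nmcli connection modify example-loopback ipv4.dns ""
$ nmcli connection up example-loopback

Reactivating the connection applies the changes to the running lo interface.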
[ "nmcli connection add con-name example-loopback type loopback", "nmcli connection modify example-loopback +ipv4.addresses 192.0.2.1/24", "nmcli con mod example-loopback loopback.mtu 16384", "nmcli connection modify example-loopback ipv4.dns 192.0.2.0", "nmcli connection up example-loopback", "ip address show lo 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16384 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 192.0.2.1/24 brd 192.0.2.255 scope global lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever", "cat /etc/resolv.conf nameserver 192.0.2.0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/proc_configuring-the-loopback-interface-by-using-nmcli_configuring-and-managing-networking
6.4. Setting up Active Directory for Synchronization
6.4. Setting up Active Directory for Synchronization Synchronizing user accounts is enabled within IdM. It is only necessary to set up a synchronization agreement ( Section 6.5.1, "Creating Synchronization Agreements" ). However, the Active Directory does need to be configured in a way that allows the Identity Management server to connect to it. 6.4.1. Creating an Active Directory User for Synchronization On the Windows server, it is necessary to create the user that the IdM server will use to connect to the Active Directory domain. The process for creating a user in Active Directory is covered in the Windows server documentation at http://technet.microsoft.com/en-us/library/cc732336.aspx . The new user account must have the proper permissions: Grant the synchronization user account Replicating directory changes rights to the synchronized Active Directory subtree. Replicator rights are required for the synchronization user to perform synchronization operations. Replicator rights are described in http://support.microsoft.com/kb/303972 . Add the synchronization user as a member of the Account Operators and Enterprise Read-only Domain Controllers groups. It is not necessary for the user to belong to the Domain Admins group. 6.4.2. Setting up an Active Directory Certificate Authority The Identity Management server connects to the Active Directory server using a secure connection. This requires that the Active Directory server have an available CA certificate or CA certificate chain available, which can be imported into the Identity Management security databases, so that the Windows server is a trusted peer. While this could technically be done with an external (to Active Directory) CA, most deployments should use the Certificate Services available with Active Directory. The procedure for setting up and configuring certificate services on Active Directory is covered in the Microsoft documentation at http://technet.microsoft.com/en-us/library/cc772393(v=WS.10).aspx .
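Before creating the synchronization agreement, it can be helpful to confirm from the Identity Management server that the Active Directory server presents its certificate chain over LDAPS. This check is a sketch and is not part of the official setup; it assumes a hypothetical domain controller named ad.example.com:

$ openssl s_client -connect ad.example.com:636 -showcerts </dev/null

Review the certificate chain in the output and confirm that the CA certificate you plan to import into the Identity Management security databases appears in it.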
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/setting_up_active_directory
Chapter 12. Managing file system permissions
Chapter 12. Managing file system permissions File system permissions control the ability of user and group accounts to read, modify, and execute the contents of the files and to enter directories. Set permissions carefully to protect your data against unauthorized access. 12.1. Managing file permissions Every file or directory has three levels of ownership: User owner ( u ). Group owner ( g ). Others ( o ). Each level of ownership can be assigned the following permissions: Read ( r ). Write ( w ). Execute ( x ). Note that the execute permission for a file allows you to execute that file. The execute permission for a directory allows you to access the contents of the directory, but not execute it. When a new file or directory is created, the default set of permissions are automatically assigned to it. The default permissions for a file or directory are based on two factors: Base permission. The user file-creation mode mask ( umask ). 12.1.1. Base file permissions Whenever a new file or directory is created, a base permission is automatically assigned to it. Base permissions for a file or directory can be expressed in symbolic or octal values. Permission Symbolic value Octal value No permission --- 0 Execute --x 1 Write -w- 2 Write and execute -wx 3 Read r-- 4 Read and execute r-x 5 Read and write rw- 6 Read, write, execute rwx 7 The base permission for a directory is 777 ( drwxrwxrwx ), which grants everyone the permissions to read, write, and execute. This means that the directory owner, the group, and others can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. Note that individual files within a directory can have their own permission that might prevent you from editing them, despite having unrestricted access to the directory. The base permission for a file is 666 ( -rw-rw-rw- ), which grants everyone the permissions to read and write. This means that the file owner, the group, and others can read and edit the file. Example 12.1. Permissions for a file If a file has the following permissions: - indicates it is a file. rwx indicates that the file owner has permissions to read, write, and execute the file. rw- indicates that the group has permissions to read and write, but not execute the file. --- indicates that other users have no permission to read, write, or execute the file. . indicates that the SELinux security context is set for the file. Example 12.2. Permissions for a directory If a directory has the following permissions: d indicates it is a directory. rwx indicates that the directory owner has the permissions to read, write, and access the contents of the directory. As a directory owner, you can list the items (files, subdirectories) within the directory, access the content of those items, and modify them. r-x indicates that the group has permissions to read the content of the directory, but not write - create new entries or delete files. The x permission means that you can also access the directory using the cd command. --- indicates that other users have no permission to read, write, or access the contents of the directory. As someone who is not a user owner, or as a group, you cannot list the items within the directory, access information about those items, or modify them. . indicates that the SELinux security context is set for the directory. Note The base permission that is automatically assigned to a file or directory is not the default permission the file or directory ends up with. 
When you create a file or directory, the base permission is altered by the umask . The combination of the base permission and the umask creates the default permission for files and directories. 12.1.2. User file-creation mode mask The user file-creation mode mask ( umask ) is a variable that controls how file permissions are set for newly created files and directories. The umask automatically removes permissions from the base permission value to increase the overall security of a Linux system. The umask can be expressed in symbolic or octal values. Permission Symbolic value Octal value Read, write, and execute rwx 0 Read and write rw- 1 Read and execute r-x 2 Read r-- 3 Write and execute -wx 4 Write -w- 5 Execute --x 6 No permissions --- 7 The default umask for a standard user is 0002 . The default umask for a root user is 0022 . The first digit of the umask represents special permissions (sticky bit, SGID bit, or SUID bit). The last three digits of the umask represent the permissions that are removed from the user owner ( u ), group owner ( g ), and others ( o ) respectively. Example 12.3. Applying the umask when creating a file The following example illustrates how the umask with an octal value of 0137 is applied to a file with the base permission of 777 to create a file with the default permission of 640 . 12.1.3. Default file permissions The default permissions are set automatically for all newly created files and directories. The value of the default permissions is determined by applying the umask to the base permission. Example 12.4. Default permissions for a directory created by a standard user When a standard user creates a new directory , the umask is set to 002 ( rwxrwxr-x ), and the base permissions for a directory are set to 777 ( rwxrwxrwx ). This brings the default permissions to 775 ( drwxrwxr-x ). Symbolic value Octal value Base permission rwxrwxrwx 777 Umask rwxrwxr-x 002 Default permission rwxrwxr-x 775 This means that the directory owner and the group can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. Other users can only list the contents of the directory and descend into it. Example 12.5. Default permissions for a file created by a standard user When a standard user creates a new file , the umask is set to 002 ( rwxrwxr-x ), and the base permissions for a file are set to 666 ( rw-rw-rw- ). This brings the default permissions to 664 ( -rw-rw-r-- ). Symbolic value Octal value Base permission rw-rw-rw- 666 Umask rwxrwxr-x 002 Default permission rw-rw-r-- 664 This means that the file owner and the group can read and edit the file, while other users can only read the file. Example 12.6. Default permissions for a directory created by the root user When a root user creates a new directory , the umask is set to 022 ( rwxr-xr-x ), and the base permissions for a directory are set to 777 ( rwxrwxrwx ). This brings the default permissions to 755 ( rwxr-xr-x ). Symbolic value Octal value Base permission rwxrwxrwx 777 Umask rwxr-xr-x 022 Default permission rwxr-xr-x 755 This means that the directory owner can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. The group and others can only list the contents of the directory and descend into it. Example 12.7. Default permissions for a file created by the root user When a root user creates a new file , the umask is set to 022 ( rwxr-xr-x ), and the base permissions for a file are set to 666 ( rw-rw-rw- ).
This brings the default permissions to 644 ( -rw-r--r-- ). Symbolic value Octal value Base permission rw-rw-rw- 666 Umask rwxr-xr-x 022 Default permission rw-r--r-- 644 This means that the file owner can read and edit the file, while the group and others can only read the file. Note For security reasons, regular files cannot have execute permissions by default, even if the umask is set to 000 ( rwxrwxrwx ). However, directories can be created with execute permissions. 12.1.4. Changing file permissions using symbolic values You can use the chmod utility with symbolic values (a combination of letters and signs) to change file permissions for a file or directory. You can assign the following permissions : Read ( r ) Write ( w ) Execute ( x ) Permissions can be assigned to the following levels of ownership : User owner ( u ) Group owner ( g ) Other ( o ) All ( a ) To add or remove permissions you can use the following signs : + to add the permissions on top of the existing permissions - to remove the permissions from the existing permission = to remove the existing permissions and explicitly define the new ones Procedure To change the permissions for a file or directory, use: Replace <level> with the level of ownership you want to set the permissions for. Replace <operation> with one of the signs . Replace <permission> with the permissions you want to assign. Replace file-name with the name of the file or directory. For example, to grant everyone the permissions to read, write, and execute ( rwx ) my-script.sh , use the chmod a=rwx my-script.sh command. See Base file permissions for more details. Verification To see the permissions for a particular file, use: Replace file-name with the name of the file. To see the permissions for a particular directory, use: Replace directory-name with the name of the directory. To see the permissions for all the files within a particular directory, use: Replace directory-name with the name of the directory. Example 12.8. Changing permissions for files and directories To change file permissions for my-file.txt from -rw-rw-r-- to -rw------- , use: Display the current permissions for my-file.txt : Remove the permissions to read, write, and execute ( rwx ) the file from group owner ( g ) and others ( o ): Note that any permission that is not specified after the equals sign ( = ) is automatically prohibited. Verify that the permissions for my-file.txt were set correctly: To change file permissions for my-directory from drwxrwx--- to drwxrwxr-x , use: Display the current permissions for my-directory : Add the read and execute ( r-x ) access for all users ( a ): Verify that the permissions for my-directory and its content were set correctly: 12.1.5. Changing file permissions using octal values You can use the chmod utility with octal values (numbers) to change file permissions for a file or directory. Procedure To change the file permissions for an existing file or directory, use: Replace file-name with the name of the file or directory. Replace octal_value with an octal value. See Base file permissions for more details. 12.2. Managing the Access Control List Each file and directory can only have one user owner and one group owner at a time. If you want to grant a user permissions to access specific files or directories that belong to a different user or group while keeping other files and directories private, you can utilize Linux Access Control Lists (ACLs). 12.2.1.
Setting the Access Control List You can use the setfacl utility to set the ACL for a file or directory. Prerequisites You have root access. Procedure To display the current ACL for a particular file or directory, run: Replace file-name with the name of the file or directory. To set the ACL for a file or directory, use: Replace username with the name of the user, symbolic_value with a symbolic value, and file-name with the name of the file or directory. For more information, see the setfacl man page on your system. Example 12.9. Modifying permissions for a group project The following example describes how to modify permissions for the group-project file, which is owned by the root user and belongs to the root group, so that this file is: Not executable by anyone. The user andrew has the rw- permissions. The user susan has the --- permissions. Other users have the r-- permissions. Procedure Verification To verify that the user andrew has the rw- permission, the user susan has the --- permission, and other users have the r-- permission, use: The output returns: 12.3. Managing the umask You can use the umask utility to display, set, or change the current or default value of the umask . 12.3.1. Displaying the current value of the umask You can use the umask utility to display the current value of the umask in symbolic or octal mode. Procedure To display the current value of the umask in symbolic mode, use: To display the current value of the umask in octal mode, use: Note When displaying the umask in octal mode, you may notice it displayed as a four-digit number ( 0002 or 0022 ). The first digit of the umask represents a special bit (sticky bit, SGID bit, or SUID bit). If the first digit is set to 0 , the special bit is not set. 12.3.2. Setting the umask using symbolic values You can use the umask utility with symbolic values (a combination of letters and signs) to set the umask for the current shell session. You can assign the following permissions : Read ( r ) Write ( w ) Execute ( x ) Permissions can be assigned to the following levels of ownership : User owner ( u ) Group owner ( g ) Other ( o ) All ( a ) To add or remove permissions you can use the following signs : + to add the permissions on top of the existing permissions - to remove the permissions from the existing permission = to remove the existing permissions and explicitly define the new ones Note Any permission that is not specified after the equals sign ( = ) is automatically prohibited. Procedure To set the umask for the current shell session, use: Replace <level> with the level of ownership you want to set the umask for. Replace <operation> with one of the signs . Replace <permission> with the permissions you want to assign. For example, to set the umask to u=rwx,g=rwx,o=rwx , use umask -S a=rwx . See User file-creation mode mask for more details. Note The umask is only valid for the current shell session. 12.3.3. Setting the umask using octal values You can use the umask utility with octal values (numbers) to set the umask for the current shell session. Procedure To set the umask for the current shell session, use: Replace octal_value with an octal value. See User file-creation mode mask for more details. Note The umask is only valid for the current shell session. 12.3.4. Changing the default umask for the non-login shell You can change the default bash umask for standard users by modifying the /etc/bashrc file. Prerequisites You have root access. Procedure Open the /etc/bashrc file in the editor.
Modify the following sections to set a new default bash umask : Note that changing the UID -gt 199 condition applies the new umask to all UIDs greater than 199 and can impact services and security. Replace the default octal value of the umask ( 002 ) with another octal value. See User file-creation mode mask for more details. Save the changes and exit the editor. 12.3.5. Changing the default umask for the login shell You can change the default bash umask for the root user by modifying the /etc/profile file. Prerequisites You have root access. Procedure As root , open the /etc/profile file in the editor. Modify the following sections to set a new default bash umask : Replace the default octal value of the umask ( 022 ) with another octal value. See User file-creation mode mask for more details. Save the changes and exit the editor. 12.3.6. Changing the default umask for a specific user You can change the default umask for a specific user by modifying the .bashrc file for that user. Procedure Append the line that specifies the octal value of the umask into the .bashrc file for the particular user. Replace octal_value with an octal value and replace username with the name of the user. See User file-creation mode mask for more details. 12.3.7. Setting default permissions for newly created home directories You can change the permission modes for home directories of newly created users by modifying the /etc/login.defs file. Procedure As root , open the /etc/login.defs file in the editor. Modify the following section to set a new default HOME_MODE : Replace the default octal value ( 0700 ) with another octal value. The selected mode will be used to create the permissions for the home directory. If HOME_MODE is set, save the changes and exit the editor. If HOME_MODE is not set, modify the UMASK to set the mode for the newly created home directories: Replace the default octal value ( 022 ) with another octal value. See User file-creation mode mask for more details. Save the changes and exit the editor.
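To see how a particular umask interacts with the base permissions described earlier, you can set it in a disposable shell session and create a test file and directory. The following sketch uses a umask of 027 ; the owner, date, and size fields in the output are illustrative:

$ umask 027
$ touch test-file && mkdir test-dir
$ ls -l test-file
-rw-r-----. 1 user user 0 Mar 2 08:43 test-file
$ ls -ld test-dir
drwxr-x---. 2 user user 6 Mar 2 08:43 test-dir

The file receives 640 : the umask removes write access from the group and all access from others, and a regular file never receives execute bits by default. The directory receives 750 from its base permission of 777 for the same reason.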
[ "ls -l -rwxrw----. 1 sysadmins sysadmins 2 Mar 2 08:43 file", "ls -dl directory drwxr-----. 1 sysadmins sysadmins 2 Mar 2 08:43 directory", "chmod <level><operation><permission> file-name", "ls -l file-name", "ls -dl directory-name", "ls -l directory-name", "ls -l my-file.txt -rw-rw-r--. 1 username username 0 Feb 24 17:56 my-file.txt", "chmod go= my-file.txt", "ls -l my-file.txt -rw-------. 1 username username 0 Feb 24 17:56 my-file.txt", "ls -dl my-directory drwxrwx---. 2 username username 4096 Feb 24 18:12 my-directory", "chmod o+rx my-directory", "ls -dl my-directory drwxrwxr-x. 2 username username 4096 Feb 24 18:12 my-directory", "chmod octal_value file-name", "getfacl file-name", "setfacl -m u: username : symbolic_value file-name", "setfacl -m u:andrew:rw- group-project setfacl -m u:susan:--- group-project", "getfacl group-project", "file: group-project owner: root group: root user:andrew:rw- user:susan:--- group::r-- mask::rw- other::r--", "umask -S", "umask", "umask -S <level><operation><permission>", "umask octal_value", "if [ USDUID -gt 199 ] && [ \"id -gn\" = \"id -un\" ]; then umask 002 else umask 022 fi", "if [ USDUID -gt 199 ] && [ \"/usr/bin/id -gn\" = \"/usr/bin/id -un\" ]; then umask 002 else umask 022 fi", "echo 'umask octal_value ' >> /home/ username /.bashrc", "HOME_MODE is used by useradd(8) and newusers(8) to set the mode for new home directories. If HOME_MODE is not set, the value of UMASK is used to create the mode. HOME_MODE 0700", "Default initial \"umask\" value used by login(1) on non-PAM enabled systems. Default \"umask\" value for pam_umask(8) on PAM enabled systems. UMASK is also used by useradd(8) and newusers(8) to set the mode for new home directories if HOME_MODE is not set. 022 is the default value, but 027, or even 077, could be considered for increased privacy. There is no One True Answer here: each sysadmin must make up their mind. UMASK 022" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/managing-file-system-permissions_configuring-basic-system-settings
Chapter 4. Migrating from internal Satellite databases to external databases
Chapter 4. Migrating from internal Satellite databases to external databases When you install Red Hat Satellite, the satellite-installer command installs PostgreSQL databases on the same server as Satellite. If you are using the default internal databases but want to start using external databases to help with the server load, you can migrate your internal databases to external databases. To confirm whether your Satellite Server has internal or external databases, you can query the status of your databases: For PostgreSQL, enter the following command: Red Hat does not provide support or tools for external database maintenance. This includes backups, upgrades, and database tuning. You must have your own database administrator to support and maintain external databases. To migrate from the default internal databases to external databases, you must complete the following procedures: Section 4.2, "Preparing a host for external databases" . Prepare a Red Hat Enterprise Linux 8 server to host the external databases. Section 4.3, "Installing PostgreSQL" . Prepare PostgreSQL with databases for Satellite, Pulp and Candlepin with dedicated users owning them. Section 4.4, "Migrating to external databases" . Edit the parameters of satellite-installer to point to the new databases, and run satellite-installer . 4.1. PostgreSQL as an external database considerations Foreman, Katello, and Candlepin use the PostgreSQL database. If you want to use PostgreSQL as an external database, the following information can help you decide if this option is right for your Satellite configuration. Satellite supports PostgreSQL version 13. Advantages of external PostgreSQL Increase in free memory and free CPU on Satellite Flexibility to set shared_buffers on the PostgreSQL database to a high number without the risk of interfering with other services on Satellite Flexibility to tune the PostgreSQL server's system without adversely affecting Satellite operations Disadvantages of external PostgreSQL Increase in deployment complexity that can make troubleshooting more difficult The external PostgreSQL server is an additional system to patch and maintain If either Satellite or the PostgreSQL database server suffers a hardware or storage failure, Satellite is not operational If there is latency between the Satellite server and database server, performance can suffer If you suspect that the PostgreSQL database on your Satellite is causing performance problems, use the information in Satellite 6: How to enable postgres query logging to detect slow running queries to determine if you have slow queries. Queries that take longer than one second are typically caused by performance issues with large installations, and moving to an external database might not help. If you have slow queries, contact Red Hat Support. 4.2. Preparing a host for external databases Install a freshly provisioned system with the latest Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 8 to host the external databases. Subscriptions for Red Hat Enterprise Linux do not provide the correct service level agreement for using Satellite with external databases. You must also attach a Satellite subscription to the base operating system that you want to use for the external databases. Prerequisites The prepared host must meet Satellite's Storage Requirements . You must attach a Satellite subscription to your server. 
For more information about subscription, see Attaching the Satellite Infrastructure Subscription in Installing Satellite Server in a connected network environment . Procedure Select the operating system and version that you are installing the external database on: Red Hat Enterprise Linux 9 Red Hat Enterprise Linux 8 4.2.1. Red Hat Enterprise Linux 9 Disable all repositories: Enable the following repositories: Verification Verify that the required repositories are enabled: 4.2.2. Red Hat Enterprise Linux 8 Disable all repositories: Enable the following repositories: Enable the following module: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 because these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency on the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause the installation process to fail and can be safely ignored. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . Verification Verify that the required repositories are enabled: 4.3. Installing PostgreSQL You can install only the same version of PostgreSQL that is installed with the satellite-installer tool during an internal database installation. Satellite supports PostgreSQL version 12. Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/lib/pgsql/data/postgresql.conf file: Note that the default configuration of external PostgreSQL must be adjusted to work with Satellite. The recommended baseline configuration adjustments are as follows: checkpoint_completion_target: 0.9 max_connections: 500 shared_buffers: 512MB work_mem: 4MB Remove the # and edit the line so that PostgreSQL listens for inbound connections: Add the following line to the end of the file to use SCRAM for authentication: Edit the /var/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start and enable the PostgreSQL service, enter the following command: Open the postgresql port on the external PostgreSQL server: Make the changes persistent: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Connect to the Pulp database: Create the hstore extension: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 4.4. Migrating to external databases Back up and transfer existing data, then use the satellite-installer command to configure Satellite to connect to an external PostgreSQL database server. Prerequisites You have installed and configured a PostgreSQL server on a Red Hat Enterprise Linux server. Procedure On Satellite Server, stop all Satellite services except for PostgreSQL: Back up the internal databases: Transfer the data to the new external databases: Use the satellite-installer command to update Satellite to point to the new databases: Remove the PostgreSQL package on Satellite Server: Remove the PostgreSQL data directory:
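For reference, the tuning recommendations listed in the procedure correspond to the following lines in the /var/lib/pgsql/data/postgresql.conf file. This excerpt is a sketch that combines the settings named above; your database administrator might need to tune the values further for your workload:

listen_addresses = '*'
password_encryption = scram-sha-256
checkpoint_completion_target = 0.9
max_connections = 500
shared_buffers = 512MB
work_mem = 4MB

Restart the postgresql service after editing the file so that the new values take effect.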
[ "satellite-maintain service status --only postgresql", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms", "dnf repolist enabled", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "dnf module enable satellite:el8", "dnf repolist enabled", "dnf install postgresql-server postgresql-evr postgresql-contrib", "postgresql-setup initdb", "vi /var/lib/pgsql/data/postgresql.conf", "listen_addresses = '*'", "password_encryption=scram-sha-256", "vi /var/lib/pgsql/data/pg_hba.conf", "host all all Satellite_ip /32 scram-sha-256", "systemctl enable --now postgresql", "firewall-cmd --add-service=postgresql", "firewall-cmd --runtime-to-permanent", "su - postgres -c psql", "CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;", "postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".", "pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION", "\\q", "PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"", "satellite-maintain service stop --exclude postgresql", "satellite-maintain backup online --preserve-directory --skip-pulp-content /var/migration_backup", "PGPASSWORD=' Foreman_Password ' pg_restore -h postgres.example.com -U foreman -d foreman < /var/migration_backup/foreman.dump PGPASSWORD=' Candlepin_Password ' pg_restore -h postgres.example.com -U candlepin -d candlepin < /var/migration_backup/candlepin.dump PGPASSWORD=' Pulpcore_Password ' pg_restore -h postgres.example.com -U pulp -d pulpcore < /var/migration_backup/pulpcore.dump", "satellite-installer --katello-candlepin-manage-db false --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-user candlepin --katello-candlepin-db-password Candlepin_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-user pulp --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-db-manage false --foreman-db-host postgres.example.com --foreman-db-database foreman --foreman-db-username foreman --foreman-db-password Foreman_Password", "satellite-maintain packages remove postgresql-server", "rm -fr /var/lib/pgsql/data" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/Migrating_from_Internal_Databases_to_External_Databases_admin
Migrating Apicurio Registry deployments
Migrating Apicurio Registry deployments Red Hat build of Apicurio Registry 2.6 Migrate from Apicurio Registry version 1.1 to 2.6 Red Hat build of Apicurio Registry Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/migrating_apicurio_registry_deployments/index
Chapter 2. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1]
Chapter 2. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1] Description ContainerRuntimeConfig describes a customized Container Runtime configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ContainerRuntimeConfigSpec defines the desired state of ContainerRuntimeConfig status object ContainerRuntimeConfigStatus defines the observed state of a ContainerRuntimeConfig 2.1.1. .spec Description ContainerRuntimeConfigSpec defines the desired state of ContainerRuntimeConfig Type object Required containerRuntimeConfig Property Type Description containerRuntimeConfig object ContainerRuntimeConfiguration defines the tuneables of the container runtime machineConfigPoolSelector object MachineConfigPoolSelector selects which pools the ContainerRuntimeConfig should apply to. A nil selector will result in no pools being selected. 2.1.2. .spec.containerRuntimeConfig Description ContainerRuntimeConfiguration defines the tuneables of the container runtime Type object Property Type Description defaultRuntime string defaultRuntime is the name of the OCI runtime to be used as the default. logLevel string logLevel specifies the verbosity of the logs based on the level it is set to. Options are fatal, panic, error, warn, info, and debug. logSizeMax integer-or-string logSizeMax specifies the maximum size allowed for the container log file. Negative numbers indicate that no size limit is imposed. If it is positive, it must be >= 8192 to match/exceed conmon's read buffer. overlaySize integer-or-string overlaySize specifies the maximum size of a container image. This flag can be used to set quota on the size of container images. (default: 10GB) pidsLimit integer pidsLimit specifies the maximum number of processes allowed in a container 2.1.3. .spec.machineConfigPoolSelector Description MachineConfigPoolSelector selects which pools the ContainerRuntimeConfig should apply to. A nil selector will result in no pools being selected. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.4. 
.spec.machineConfigPoolSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.5. .spec.machineConfigPoolSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.6. .status Description ContainerRuntimeConfigStatus defines the observed state of a ContainerRuntimeConfig Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object ContainerRuntimeConfigCondition defines the state of the ContainerRuntimeConfig observedGeneration integer observedGeneration represents the generation observed by the controller. 2.1.7. .status.conditions Description conditions represents the latest available observations of current state. Type array 2.1.8. .status.conditions[] Description ContainerRuntimeConfigCondition defines the state of the ContainerRuntimeConfig Type object Property Type Description lastTransitionTime `` lastTransitionTime is the time of the last update to the current status object. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the reason for the condition's last transition. Reasons are PascalCase. status string status of the condition, one of True, False, Unknown. type string type specifies the state of the operator's reconciliation functionality. 2.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs DELETE : delete collection of ContainerRuntimeConfig GET : list objects of kind ContainerRuntimeConfig POST : create a ContainerRuntimeConfig /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name} DELETE : delete a ContainerRuntimeConfig GET : read the specified ContainerRuntimeConfig PATCH : partially update the specified ContainerRuntimeConfig PUT : replace the specified ContainerRuntimeConfig /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name}/status GET : read status of the specified ContainerRuntimeConfig PATCH : partially update status of the specified ContainerRuntimeConfig PUT : replace status of the specified ContainerRuntimeConfig 2.2.1. /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs HTTP method DELETE Description delete collection of ContainerRuntimeConfig Table 2.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ContainerRuntimeConfig Table 2.2. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a ContainerRuntimeConfig Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body ContainerRuntimeConfig schema Table 2.5. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 201 - Created ContainerRuntimeConfig schema 202 - Accepted ContainerRuntimeConfig schema 401 - Unauthorized Empty 2.2.2. /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the ContainerRuntimeConfig HTTP method DELETE Description delete a ContainerRuntimeConfig Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ContainerRuntimeConfig Table 2.9. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ContainerRuntimeConfig Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ContainerRuntimeConfig Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body ContainerRuntimeConfig schema Table 2.14. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 201 - Created ContainerRuntimeConfig schema 401 - Unauthorized Empty 2.2.3. /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the ContainerRuntimeConfig HTTP method GET Description read status of the specified ContainerRuntimeConfig Table 2.16. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ContainerRuntimeConfig Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ContainerRuntimeConfig Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ContainerRuntimeConfig schema Table 2.21. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 201 - Created ContainerRuntimeConfig schema 401 - Unauthorized Empty
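As a minimal hedged sketch, a complete ContainerRuntimeConfig object assembled from the fields described in this chapter might look as follows; the pool label custom-crio: high-pid-limit is a hypothetical example and must match a label that you have applied to your target MachineConfigPool:

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-pids-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: high-pid-limit   # hypothetical label on the target pool
  containerRuntimeConfig:
    pidsLimit: 2048                 # maximum processes allowed per container
    logLevel: debug                 # one of fatal, panic, error, warn, info, debug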
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_apis/containerruntimeconfig-machineconfiguration-openshift-io-v1
C.10. References
C.10. References For more information about tracepoints and the GFS2 glocks file, see the following resources: For information on glock internal locking rules, see http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/gfs2-glocks.txt;h=0494f78d87e40c225eb1dc1a1489acd891210761;hb=HEAD . For information on event tracing, see http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/trace/events.txt;h=09bd8e9029892e4e1d48078de4d076e24eff3dd2;hb=HEAD . For information on the trace-cmd utility, see http://lwn.net/Articles/341902/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ap-references-gfs2
Chapter 2. Overview of installing and deploying OpenShift AI
Chapter 2. Overview of installing and deploying OpenShift AI Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence (AI) applications. It provides a fully supported environment that lets you rapidly develop, train, test, and deploy machine learning models on-premises or in the public cloud. OpenShift AI is provided as a managed cloud service add-on for Red Hat OpenShift or as self-managed software that you can install on-premises or in the public cloud on OpenShift. For information about installing OpenShift AI as self-managed software on your OpenShift cluster in a connected or a disconnected environment, see Product Documentation for Red Hat OpenShift AI Self-Managed . There are two deployment options for Red Hat OpenShift AI as a managed cloud service add-on: OpenShift Dedicated with a Customer Cloud Subscription on Amazon Web Services or Google Cloud Platform OpenShift Dedicated is a complete OpenShift Container Platform cluster provided as a cloud service, configured for high availability, and dedicated to a single customer. OpenShift Dedicated is professionally managed by Red Hat and hosted on Amazon Web Services (AWS) or Google Cloud Platform (GCP). The Customer Cloud Subscription (CCS) model allows Red Hat to deploy and manage clusters into a customer's AWS or GCP account. Contact your Red Hat account manager to get OpenShift Dedicated through a CCS. Red Hat OpenShift Service on AWS (ROSA) ROSA is a fully-managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications. You subscribe to the service directly from your AWS account. Installing OpenShift AI as a managed cloud service involves the following high-level tasks: Confirm that your OpenShift cluster meets all requirements. Configure an identity provider for your OpenShift cluster. Add administrative users for your OpenShift cluster. Subscribe to the Red Hat OpenShift Data Science Add-on. For OpenShift Dedicated with a CCS for AWS or GCP, get a subscription through Red Hat. For ROSA, get a subscription through the AWS Marketplace. Install the OpenShift Data Science Add-on. Access the OpenShift AI dashboard. Optionally, enable graphics processing units (GPUs) in OpenShift AI to ensure that your data scientists can run compute-heavy workloads in their models.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_the_openshift_ai_cloud_service/overview-of-deploying-openshift-ai_install
Chapter 1. The LVM Logical Volume Manager
Chapter 1. The LVM Logical Volume Manager This chapter provides a summary of the features of the LVM logical volume manager that are new since the initial release of Red Hat Enterprise Linux 7. This chapter also provides a high-level overview of the components of the Logical Volume Manager (LVM). 1.1. New and Changed Features This section lists features of the LVM logical volume manager that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. The documentation for thinly-provisioned volumes and thinly-provisioned snapshots has been clarified. Additional information about LVM thin provisioning is now provided in the lvmthin (7) man page. For general information on thinly-provisioned logical volumes, see Section 2.3.4, "Thinly-Provisioned Logical Volumes (Thin Volumes)" . For information on thinly-provisioned snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . This manual now documents the lvm dumpconfig command in Section B.2, "The lvmconfig Command" . Note that as of the Red Hat Enterprise Linux 7.2 release, this command was renamed lvmconfig , although the old format continues to work. This manual now documents LVM profiles in Section B.3, "LVM Profiles" . This manual now documents the lvm command in Section 3.6, "Displaying LVM Information with the lvm Command" . In the Red Hat Enterprise Linux 7.1 release, you can control activation of thin pool snapshots with the -k and -K options of the lvcreate and lvchange commands, as documented in Section 4.4.20, "Controlling Logical Volume Activation" . This manual documents the --force argument of the vgimport command. This allows you to import volume groups that are missing physical volumes and subsequently run the vgreduce --removemissing command. For information on the vgimport command, refer to Section 4.3.15, "Moving a Volume Group to Another System" . This manual documents the --mirrorsonly argument of the vgreduce command. This allows you to remove only the logical volumes that are mirror images from a physical volume that has failed. For information on using this option, refer to Section 4.3.15, "Moving a Volume Group to Another System" . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. Many LVM processing commands now accept the -S or --select option to define selection criteria for those commands. LVM selection criteria are documented in the new appendix Appendix C, LVM Selection Criteria . This document provides basic procedures for creating cache logical volumes in Section 4.4.8, "Creating LVM Cache Logical Volumes" . The troubleshooting chapter of this document includes a new section, Section 6.7, "Duplicate PV Warnings for Multipathed Devices" . As of the Red Hat Enterprise Linux 7.2 release, the lvm dumpconfig command was renamed lvmconfig , although the old format continues to work. This change is reflected throughout this document. In addition, small technical corrections and clarifications have been made throughout the document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. LVM supports RAID0 segment types. 
RAID0 spreads logical volume data across multiple data subvolumes in units of stripe size. For information on creating RAID0 volumes, see Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . You can report information about physical volumes, volume groups, logical volumes, physical volume segments, and logical volume segments all at once with the lvm fullreport command. For information on this command and its capabilities, see the lvm-fullreport (8) man page. LVM supports log reports, which contain a log of operations, messages, and per-object status with complete object identification collected during LVM command execution. For an example of an LVM log report, see Section 4.8.6, "Command Log Reporting (Red Hat Enterprise Linux 7.3 and later)" . For further information about the LVM log report, see the lvmreport (7) man page. You can use the --reportformat option of the LVM display commands to display the output in JSON format. For an example of output displayed in JSON format, see Section 4.8.5, "JSON Format Output (Red Hat Enterprise Linux 7.3 and later)" . You can now configure your system to track thin snapshot and thin logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. For information on historical logical volumes, see Section 4.4.21, "Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later)" . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. Red Hat Enterprise Linux 7.4 provides support for RAID takeover and RAID reshaping. For a summary of these features, see Section 4.4.3.12, "RAID Takeover (Red Hat Enterprise Linux 7.4 and Later)" and Section 4.4.3.13, "Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later)" .
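As a brief illustration of the -k and -K options for controlling activation of thin pool snapshots mentioned above (the volume group and volume names here are hypothetical):

# Create a thin snapshot without the activation-skip flag (-kn),
# so that it activates like an ordinary logical volume:
lvcreate -s -kn -n mysnap vg00/mythinvol
# Activate a snapshot that carries the skip flag by ignoring the flag (-K):
lvchange -ay -K vg00/mysnap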
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_overview
probe::scsi.iodone
probe::scsi.iodone Name probe::scsi.iodone - SCSI command completed by low level driver and enqueued into the done queue. Synopsis scsi.iodone Values device_state The current state of the device data_direction_str Data direction, as a string req_addr The current struct request pointer, as a number dev_id The scsi device id lun The lun number scsi_timer_pending 1 if a timer is pending on this request device_state_str The current state of the device, as a string host_no The host number channel The channel number data_direction The data_direction specifies whether this command is from/to the device.
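A minimal SystemTap sketch that uses this probe point, printing a few of the values listed above for every completed SCSI command (running it requires SystemTap and the kernel debuginfo packages):

# scsi_iodone.stp - run with: stap scsi_iodone.stp
probe scsi.iodone {
  printf("host%d chan%d id%d lun%d dir=%s state=%s\n",
         host_no, channel, dev_id, lun,
         data_direction_str, device_state_str)
}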
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scsi-iodone
3.2. Performance Tuning with tuned and tuned-adm
3.2. Performance Tuning with tuned and tuned-adm The tuned tuning service can adapt the operating system to perform better under certain workloads by setting a tuning profile. The tuned-adm command-line tool allows users to switch between different tuning profiles. tuned Profiles Overview Several pre-defined profiles are included for common use cases, but tuned also enables you to define custom profiles, which can be either based on one of the pre-defined profiles, or defined from scratch. In Red Hat Enterprise Linux 7, the default profile is throughput-performance . The profiles provided with tuned are divided into two categories: power-saving profiles and performance-boosting profiles. The performance-boosting profiles include profiles that focus on the following aspects: low latency for storage and network high throughput for storage and network virtual machine performance virtualization host performance tuned Boot Loader plug-in You can use the tuned Bootloader plug-in to add parameters to the kernel (boot or dracut) command line. Note that only the GRUB 2 boot loader is supported and a reboot is required to apply profile changes. For example, to add the quiet parameter to a tuned profile, include the following lines in the tuned.conf file: Switching to another profile or manually stopping the tuned service removes the additional parameters. If you shut down or reboot the system, the kernel parameters persist in the grub.cfg file. Environment Variables and Expanding tuned Built-In Functions If you run tuned-adm profile profile_name and then grub2-mkconfig -o profile_path after updating the GRUB 2 configuration, you can use Bash environment variables, which are expanded after running grub2-mkconfig . For example, the following environment variable is expanded to nfsroot=/root : You can use tuned variables as an alternative to environment variables. In the following example, ${isolated_cores} expands to 1,2 , so the kernel boots with the isolcpus=1,2 parameter: In the following example, ${non_isolated_cores} expands to 0,3-5 , and the cpulist_invert built-in function is called with the argument 0,3-5 : The cpulist_invert function inverts the list of CPUs. For a 6-CPU machine, the inversion is 1,2 , and the kernel boots with the isolcpus=1,2 command-line parameter. Using tuned environment variables reduces the amount of necessary typing. You can also use various built-in functions together with tuned variables. If the built-in functions do not satisfy your needs, you can create custom functions in Python and add them to tuned in the form of plug-ins. Variables and built-in functions are expanded at run time when the tuned profile is activated. The variables can be specified in a separate file. You can, for example, add the following lines to tuned.conf : If you add isolated_cores=1,2 to the /etc/tuned/ my-variables.conf file, the kernel boots with the isolcpus=1,2 parameter. Modifying Default System tuned Profiles There are two ways of modifying the default system tuned profiles. You can either create a new tuned profile directory, or copy the directory of a system profile and edit the profile as needed. Procedure 3.1. Creating a New Tuned Profile Directory In /etc/tuned/ , create a new directory named the same as the profile you want to create: /etc/tuned/ my_profile_name / . In the new directory, create a file named tuned.conf , and include the following lines at the top: Include your profile modifications. 
For example, to use the settings from the throughput-performance profile with the value of vm.swappiness set to 5, instead of the default 10, include the following lines: To activate the profile, run: Creating a directory with a new tuned.conf file enables you to keep all your profile modifications after system tuned profiles are updated. Alternatively, copy the directory with a system profile from /usr/lib/tuned/ to /etc/tuned/ . For example: Then, edit the profile in /etc/tuned according to your needs. Note that if there are two profiles of the same name, the profile located in /etc/tuned/ is loaded. The disadvantage of this approach is that if a system profile is updated after a tuned upgrade, the changes will not be reflected in the now-outdated modified version. Resources For more information, see Section A.4, "tuned" and Section A.5, "tuned-adm" . For detailed information on using tuned and tuned-adm , see the tuned (8) and tuned-adm (1) manual pages.
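After switching profiles as described above, you can verify the result with tuned-adm; a short example (the profile name is illustrative):

tuned-adm profile my_profile_name   # switch to the custom profile
tuned-adm active                    # print the currently active profile
tuned-adm list                      # list all available profiles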
[ "[bootloader] cmdline=quiet", "[bootloader] cmdline=\"nfsroot=USDHOME\"", "[variables] isolated_cores=1,2 [bootloader] cmdline=isolcpus=USD{isolated_cores}", "[variables] non_isolated_cores=0,3-5 [bootloader] cmdline=isolcpus=USD{f:cpulist_invert:USD{non_isolated_cores}}", "[variables] include=/etc/tuned/ my-variables.conf [bootloader] cmdline=isolcpus=USD{isolated_cores}", "[main] include= profile_name", "[main] include=throughput-performance [sysctl] vm.swappiness=5", "tuned-adm profile my_profile_name", "cp -r /usr/lib/tuned/throughput-performance /etc/tuned" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-tuned_and_tuned_adm
Chapter 18. Managing More Code with Make
Chapter 18. Managing More Code with Make The GNU Make utility, commonly abbreviated as Make , is a tool for controlling the generation of executables from source files. Make automatically determines which parts of a complex program have changed and need to be recompiled. Make uses configuration files called Makefiles to control the way programs are built. 18.1. GNU make and Makefile Overview To create a usable form (usually executable files) from the source files of a particular project, you must perform several steps and record their actions and sequence so that you can repeat them later. Red Hat Enterprise Linux contains GNU make , a build system designed for this purpose. Prerequisites Understanding the concepts of compiling and linking GNU make GNU make reads Makefiles, which contain the instructions describing the build process. A Makefile contains multiple rules that describe a way to satisfy a certain condition ( target ) with a specific action ( recipe ). Rules can hierarchically depend on other rules. Running make without any options makes it look for a Makefile in the current directory and attempt to reach the default target. The actual Makefile file name can be one of Makefile , makefile , and GNUmakefile . The default target is determined from the Makefile contents. Makefile Details Makefiles use a relatively simple syntax for defining variables and rules ; a rule consists of a target and a recipe . The target specifies the output created when the rule is executed. The lines with recipes must start with the tab character. Typically, a Makefile contains rules for compiling source files, a rule for linking the resulting object files, and a target that serves as the entry point at the top of the hierarchy. Consider the following Makefile for building a C program which consists of a single file, hello.c . all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o This specifies that to reach the target all , the file hello is required. To get hello , one needs hello.o (linked by gcc ), which in turn is created from hello.c (compiled by gcc ). The target all is the default target because it is the first target that does not start with a period. Running make without any arguments is then identical to running make all , if the current directory contains this Makefile . Typical Makefile A more typical Makefile uses variables to generalize the steps and adds a "clean" target, which removes everything except the source files. CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=$(SOURCE:.c=.o) EXE=hello all: $(SOURCE) $(EXE) $(EXE): $(OBJ) $(CC) $(OBJ) -o $@ %.o: %.c $(CC) $(CFLAGS) $< -o $@ clean: rm -rf $(OBJ) $(EXE) Adding more source files to such a Makefile requires adding them to the line where the SOURCE variable is defined. Additional resources GNU make: Introduction - 2 An Introduction to Makefiles Chapter 15, Building Code with GCC 18.2. Example: Building a C Program Using a Makefile Build a sample C program using a Makefile by following the steps in the example below. 
Prerequisites Understanding of Makefiles and make Procedure Create a directory hellomake and change to this directory: Create a file hello.c with the following contents: #include <stdio.h> int main(int argc, char *argv[]) { printf("Hello, World!\n"); return 0; } Create a file Makefile with the following contents: CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=$(SOURCE:.c=.o) EXE=hello all: $(SOURCE) $(EXE) $(EXE): $(OBJ) $(CC) $(OBJ) -o $@ %.o: %.c $(CC) $(CFLAGS) $< -o $@ clean: rm -rf $(OBJ) $(EXE) Caution The Makefile recipe lines must start with the tab character. When copying the text above from the browser, you may paste spaces instead. If this happens, correct it manually. Run make : This creates an executable file hello . Run the executable file hello : Run the Makefile target clean to remove the created files: Additional Resources Section 15.8, "Example: Building a C Program with GCC" Section 15.9, "Example: Building a C++ Program with GCC" 18.3. Documentation Resources for make For more information about make , see the resources listed below. Installed Documentation Use the man and info tools to view manual pages and information pages installed on your system: Online Documentation The GNU Make Manual hosted by the Free Software Foundation The Red Hat Developer Toolset User Guide - GNU make
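As a small aside, make can also print the recipes it would run without executing them, which is useful for checking a Makefile such as the one above before a real build:

make -n         # print the commands without running them
make -n clean   # preview what the clean target would remove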
[ "all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "mkdir hellomake cd hellomake", "#include <stdio.h> int main(int argc, char *argv[]) { printf(\"Hello, World!\\n\"); return 0; }", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "make gcc -c -Wall hello.c -o hello.o gcc hello.o -o hello", "./hello Hello, World!", "make clean rm -rf hello.o hello", "man make info make" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/managing-more-code-make
28.4.3. Event Configuration in ABRT GUI
28.4.3. Event Configuration in ABRT GUI Events can use parameters passed to them as environment variables (for example, the report_Logger event accepts an output file name as a parameter). Using the respective /etc/libreport/events/ event_name .xml file, ABRT GUI determines which parameters can be specified for a selected event and allows a user to set the values for these parameters. These values are saved by ABRT GUI and reused on subsequent invocations of these events. Open the Event Configuration window by clicking Edit Preferences . This window shows a list of all available events that can be selected during the reporting process. When you select one of the configurable events, you can click the Configure Event button to configure settings for that event. If you change any of the events' parameters, they are saved in the Gnome keyring and will be used in future GUI sessions. Note All files in the /etc/libreport/ directory hierarchy are world readable and are meant to be used as global settings. Thus, it is not advisable to store user names, passwords, or any other sensitive data in them. The per-user settings (set in the GUI application and readable by the owner of $HOME only) are stored in the Gnome keyring or can be stored in a text file in $HOME/.abrt/*.conf for use in abrt-cli . Figure 28.12. The Event Configuration Window The following is a list of all configuration options available for each predefined event that is configurable in the ABRT GUI application. Logger In the Logger event configuration window, you can configure the following parameter: Log file - Specifies a file into which the crash reports are saved (by default, set to /var/log/abrt.log ). When the Append option is checked, the Logger event will append new crash reports to the log file specified in the Logger file option. When unchecked, the new crash report always replaces the previous one. Red Hat Customer Support In the Red Hat Customer Support event configuration window, you can configure the following parameters: RH Portal URL - Specifies the Red Hat Customer Support URL where crash dumps are sent (by default, set to https://api.access.redhat.com/rs ). Username - User login which is used to log into Red Hat Customer Support and create a Red Hat Customer Support database entry for a reported crash. Use your Red Hat Login acquired by creating an account on https://www.redhat.com/en , the Red Hat Customer Portal ( https://access.redhat.com/home ) or the Red Hat Network ( https://rhn.redhat.com/ ). Password - Password used to log into Red Hat Customer Support (that is, the password associated with your Red Hat Login ). When the SSL verify option is checked, the SSL protocol is used when sending the data over the network. MailX In the MailX event configuration window, you can configure the following parameters: Subject - A string that appears in the Subject field of a problem report email sent by Mailx (by default, set to "[abrt] detected a crash" ). Sender - A string that appears in the From field of a problem report email. Recipient - Email address of the recipient of a problem report email. When the Send Binary Data option is checked, the problem report email will also contain all binary files associated with the problem in an attachment. The core dump file is also sent as an attachment. 
Kerneloops.org In the Kerneloops.org event configuration window, you can configure the following parameter: Kerneloops URL - Specifies the URL to which kernel problems are reported (by default, set to http://submit.kerneloops.org/submitoops.php ). Report Uploader In the Report Uploader event configuration window, you can configure the following parameter: URL - Specifies the URL where a tarball containing compressed problem data is uploaded using the FTP or SCP protocol (by default, set to ftp://localhost:/tmp/upload ).
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-configuration-event_configuration_in_gui
Installation Guide
Installation Guide Red Hat Ceph Storage 5 Installing Red Hat Ceph Storage on Red Hat Enterprise Linux Red Hat Ceph Storage Documentation Team
[ "ceph soft nofile unlimited", "USER_NAME soft nproc unlimited", "cephadm shell ceph -s", "cephadm shell ceph -s", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches ' Red Hat Ceph Storage '", "subscription-manager attach --pool= POOL_ID", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms", "dnf update", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms", "dnf install cephadm-ansible", "cd /usr/share/cephadm-ansible", "mkdir -p inventory/staging inventory/production", "[defaults] inventory = ./inventory/staging", "touch inventory/staging/hosts touch inventory/production/hosts", "NODE_NAME_1 NODE_NAME_2 [admin] ADMIN_NODE_NAME_1", "host02 host03 host04 [admin] host01", "ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml", "ansible-playbook -i inventory/production/hosts PLAYBOOK.yml", "ssh root@myhostname root@myhostname password: Permission denied, please try again.", "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf", "systemctl restart sshd.service", "ssh root@ HOST_NAME", "ssh root@host01", "ssh root@ HOST_NAME", "ssh root@host01", "adduser USER_NAME", "adduser ceph-admin", "passwd USER_NAME", "passwd ceph-admin", "cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF", "cat << EOF >/etc/sudoers.d/ceph-admin ceph-admin ALL = (root) NOPASSWD:ALL EOF", "chmod 0440 /etc/sudoers.d/ USER_NAME", "chmod 0440 /etc/sudoers.d/ceph-admin", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm generate-key", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key", "[ceph-admin@admin cephadm-ansible]USDceph cephadm clear-key", "[ceph-admin@admin cephadm-ansible]USD ceph mgr fail", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user <user>", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user user", "ceph cephadm get-pub-key > ~/ceph.pub", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key > ~/ceph.pub", "ssh-copy-id -f -i ~/ceph.pub USER @ HOST", "[ceph-admin@admin cephadm-ansible]USD ssh-copy-id ceph-admin@host01", "[ceph-admin@admin ~]USD ssh-keygen", "ssh-copy-id USER_NAME @ HOST_NAME", "[ceph-admin@admin ~]USD ssh-copy-id ceph-admin@host01", "[ceph-admin@admin ~]USD touch ~/.ssh/config", "Host host01 Hostname HOST_NAME User USER_NAME Host host02 Hostname HOST_NAME User USER_NAME", "Host host01 Hostname host01 User ceph-admin Host host02 Hostname host02 User ceph-admin Host host03 Hostname host03 User ceph-admin", "[ceph-admin@admin ~]USD chmod 600 ~/.ssh/config", "host02 host03 host04 [admin] host01", "host02 host03 host04 [admin] host01", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit GROUP_NAME | NODE_NAME", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit clients [ceph-admin@admin 
cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host01", "cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know", "cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know", "Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.", "cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON", "cephadm bootstrap --ssh-user ceph-admin --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json", "{ \"url\":\" REGISTRY_URL \", \"username\":\" USER_NAME \", \"password\":\" PASSWORD \" }", "{ \"url\":\"registry.redhat.io\", \"username\":\"myuser1\", \"password\":\"mypassword1\" }", "cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json", "cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json", "service_type: host addr: host01 hostname: host01 --- service_type: host addr: host02 hostname: host02 --- service_type: host addr: host03 hostname: host03 --- service_type: host addr: host04 hostname: host04 --- service_type: mon placement: host_pattern: \"host[0-2]\" --- service_type: osd service_id: my_osds placement: host_pattern: \"host[1-3]\" data_devices: all: true", "cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "su - SSH_USER_NAME", "su - ceph Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0", "ssh host01 Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0", "cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --ssh-user ceph-admin --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --all --matches=\"*Ceph*\"", "subscription-manager attach --pool= POOL_ID", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos 
--enable=rhel-9-for-x86_64-appstream-rpms", "dnf install -y podman httpd-tools", "mkdir -p /opt/registry/{auth,certs,data}", "htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD", "htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1", "openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS: LOCAL_NODE_FQDN \"", "openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:admin.lab.redhat.com\"", "ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert", "cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \" LOCAL_NODE_FQDN \"", "cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com", "scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/ ssh root@host01 update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com", "podman run --restart=always --name NAME_OF_CONTAINER -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2", "podman run --restart=always --name myprivateregistry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2", "unqualified-search-registries = [\"registry.redhat.io\", \"registry.access.redhat.com\", \"registry.fedoraproject.org\", \"registry.centos.org\", \"docker.io\"]", "login registry.redhat.io", "run -v / CERTIFICATE_DIRECTORY_PATH :/certs:Z -v / CERTIFICATE_DIRECTORY_PATH /domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN : RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/ SRC_IMAGE : SRC_TAG docker:// LOCAL_NODE_FQDN :5000/ DST_IMAGE : DST_TAG", "podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-5-rhel8:latest docker://admin.lab.redhat.com:5000/rhceph/rhceph-5-rhel8:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 
docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.10 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-node-exporter:v4.10 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest docker://admin.lab.redhat.com:5000/rhceph/rhceph-5-dashboard-rhel8:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.10 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus:v4.10 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.10 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-alertmanager:v4.10", "curl -u PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD https:// LOCAL_NODE_FQDN :5000/v2/_catalog", "curl -u myregistryusername:myregistrypassword1 https://admin.lab.redhat.com:5000/v2/_catalog {\"repositories\":[\"openshift4/ose-prometheus\",\"openshift4/ose-prometheus-alertmanager\",\"openshift4/ose-prometheus-node-exporter\",\"rhceph/rhceph-5-dashboard-rhel8\",\"rhceph/rhceph-5-rhel8\"]}", "host02 host03 host04 [admin] host01", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \" --limit GROUP_NAME | NODE_NAME", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit host02", "cephadm --image PRIVATE_REGISTRY_NODE_FQDN :5000/ CUSTOM_IMAGE_NAME : IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN :5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD", "cephadm --image admin.lab.redhat.com:5000/rhceph/rhceph-5-rhel8:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1", "Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c 
/etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.", "ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD", "ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1", "ceph config set mgr mgr/cephadm/ OPTION_NAME CUSTOM_REGISTRY_NAME / CONTAINER_NAME", "container_image_prometheus container_image_grafana container_image_alertmanager container_image_node_exporter", "ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer", "ceph orch redeploy node-exporter", "ceph config rm mgr mgr/cephadm/ OPTION_NAME", "ceph config rm mgr mgr/cephadm/container_image_prometheus", "[ansible@admin ~]USD cd /usr/share/cephadm-ansible", "ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user= USER_NAME -e cephadm_pubkey_path= home/cephadm/ceph.key -e admin_node= ADMIN_NODE_NAME_1", "[ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01 [ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01", "cephadm shell ceph -s", "cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr", "podman ps", "cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr", ".Syntax [source,subs=\"verbatim,quotes\"] ---- ceph cephadm registry-login --registry-url _CUSTOM_REGISTRY_NAME_ --registry_username _REGISTRY_USERNAME_ --registry_password _REGISTRY_PASSWORD_ ----", ".Example ---- ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1 ----", "ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "[ceph-admin@admin ~]USD cat hosts host02 host03 host04 [admin] host01", 
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "dnf install podman lvm2 chrony cephadm", "ceph orch host add NEWHOST", "ceph orch host add host02 Added host 'host02' with addr '10.10.128.69' ceph orch host add host03 Added host 'host03' with addr '10.10.128.70'", "ceph orch host add HOSTNAME IP_ADDRESS", "ceph orch host add host02 10.10.128.69 Added host 'host02' with addr '10.10.128.69'", "ceph orch host ls", "ceph orch host add HOSTNAME IP_ADDR", "ceph orch host add host01 10.10.128.68", "ceph orch host set-addr HOSTNAME IP_ADDR", "ceph orch host set-addr HOSTNAME IPV4_ADDRESS", "service_type: host addr: hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: hostname: host03 labels: - mon - osd - mgr --- service_type: host addr: hostname: host04 labels: - mon - osd", "ceph orch apply -i hosts.yaml Added host 'host02' with addr '10.10.128.69' Added host 'host03' with addr '10.10.128.70' Added host 'host04' with addr '10.10.128.71'", "cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml", "ceph orch host ls HOST ADDR LABELS STATUS host02 host02 mon osd mgr host03 host03 mon osd mgr host04 host04 mon osd", "cephadm shell", "ceph orch host add HOST_NAME HOST_ADDRESS", "ceph orch host add host03 10.10.128.70", "cephadm shell", "ceph orch host ls", "ceph orch host drain HOSTNAME", "ceph orch host drain host02", "ceph orch osd rm status", "ceph orch ps HOSTNAME", "ceph orch ps host02", "ceph orch host rm HOSTNAME", "ceph orch host rm host02", "cephadm shell", "ceph orch host label add HOSTNAME LABEL", "ceph orch host label add host02 mon", "ceph orch host ls", "cephadm shell", "ceph orch host label rm HOSTNAME LABEL", "ceph orch host label rm host02 mon", "ceph orch host ls", "cephadm shell", "ceph orch host ls HOST ADDR LABELS STATUS host01 _admin mon osd mgr host02 mon osd mgr mylabel", "ceph orch apply DAEMON --placement=\"label: LABEL \"", "ceph orch apply prometheus --placement=\"label:mylabel\"", "vi placement.yml", "service_type: prometheus placement: label: \"mylabel\"", "ceph orch apply -i FILENAME", "ceph orch apply -i placement.yml Scheduled prometheus update...", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=prometheus NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID prometheus.host02 host02 *:9095 running (2h) 8m ago 2h 85.3M - 2.22.2 ac25aac5d567 ad8c7593d7c0", "ceph orch apply mon 5", "ceph orch apply mon --unmanaged", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06", "ceph orch host label add HOSTNAME _admin", "ceph orch host label add host03 _admin", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06", "ceph orch host label add HOSTNAME mon", "ceph orch host label add host02 mon ceph orch host label add host03 mon", "ceph orch host ls", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon host04 host05 host06", "ceph orch apply mon label:mon", "ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3", "ceph orch apply mon host01,host02,host03", "ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [ NODE:IP_ADDRESS_OR_NETWORK_NAME ...]", "ceph orch apply mon host02:10.10.128.69 host03:mynetwork", "ceph orch host ls HOST ADDR 
LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06", "cephadm shell", "ceph orch host label rm HOSTNAME LABEL", "ceph orch host label rm host03 _admin", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06", "ceph orch apply mgr NUMBER_OF_DAEMONS", "ceph orch apply mgr 3", "ceph orch apply mgr --placement \" HOSTNAME1 HOSTNAME2 HOSTNAME3 \"", "ceph orch apply mgr --placement \"host02 host03 host04\"", "ceph orch device ls [--hostname= HOSTNAME1 HOSTNAME2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch daemon add osd HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd host02:/dev/sdb", "ceph orch apply osd --all-available-devices", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "host02 host03 host04 [clients] client01 client02 client03 [admin] host01", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit CLIENT_GROUP_NAME | CLIENT_NODE_NAME", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --limit clients", "ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"client_group\":\" CLIENT_GROUP_NAME \",\"conf\":\" CEPH_CONFIGURATION_PATH \",\"keyring_dest\":\" KEYRING_DESTINATION_PATH \"}'", "[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"266ee7a8-2a05-11eb-b846-5254002d4916\",\"client_group\":\"clients\",\"keyring\":\"/etc/ceph/ceph.client.admin.keyring\",\"conf\":\"/etc/ceph/ceph.conf\",\"keyring_dest\":\"/etc/ceph/custom.name.ceph.keyring\"}'", "ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"conf\":\" CEPH_CONFIGURATION_PATH \"}'", "[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"266ee7a8-2a05-11eb-b846-5254002d4916\",\"keyring\":\"/etc/ceph/ceph.client.admin.keyring\",\"conf\":\"/etc/ceph/ceph.conf\"}'", "ls -l /etc/ceph/ -rw-------. 1 ceph ceph 151 Jul 11 12:23 custom.name.ceph.keyring -rw-------. 1 ceph ceph 151 Jul 11 12:23 ceph.keyring -rw-------. 
1 ceph ceph 269 Jul 11 12:23 ceph.conf", "host02 host03 host04 [admin] host01 [clients] client01 client02 client03", "ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid= FSID -vvv", "[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=a6ca415a-cde7-11eb-a41a-002590fc2544 -vvv", "ceph mgr module disable cephadm", "ceph fsid", "exit", "cephadm rm-cluster --force --zap-osds --fsid FSID", "cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR", "[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: 
- name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml", "TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi 
change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml", "TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :", "[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE", "[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml", "cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]", "cephadm adopt --style=legacy --name prometheus.host02", "cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]", "cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm check-host [--expect-hostname HOSTNAME ]", "cephadm check-host --expect-hostname host02", "cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON 
]", "cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]", "cephadm enter --name 52c611f2b1d9", "cephadm help", "cephadm help", "cephadm install PACKAGES", "cephadm install ceph-common ceph-osd", "cephadm --image IMAGE_ID inspect-image", "cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image", "cephadm list-networks", "cephadm list-networks", "cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]", "cephadm ls --no-detail", "cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs", "cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f", "cephadm prepare-host [--expect-hostname HOSTNAME ]", "cephadm prepare-host cephadm prepare-host --expect-hostname host01", "cephadm [-h] [--image IMAGE_ID ] pull", "cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull", "cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]", "cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }", "cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file", "cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]", "cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8", "cephadm rm-cluster [--fsid FSID ] [--force]", "cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm rm-repo [-h]", "cephadm rm-repo", "cephadm run [--fsid FSID ] --name DAEMON_NAME", "cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8", "cephadm shell [--fsid FSID ] [--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]", "cephadm shell -- ceph orch ls cephadm shell", "cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable", "cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 start", "cephadm version", "cephadm version" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html-single/installation_guide/%7Bupgrade-guide%7D
Chapter 5. Fixed issues
Chapter 5. Fixed issues The following sections list the issues fixed in AMQ Streams 1.6.x. Red Hat recommends that you upgrade to the latest patch release if you are using AMQ Streams 1.6.x with RHEL 7 and 8. For details of the issues fixed in: Kafka 2.6.3, refer to the Kafka 2.6.3 Release Notes Kafka 2.6.2, refer to the Kafka 2.6.2 Release Notes Kafka 2.6.1, refer to the Kafka 2.6.1 Release Notes Kafka 2.6.0, refer to the Kafka 2.6.0 Release Notes 5.1. Fixed issues for AMQ Streams 1.6.7 The AMQ Streams 1.6.7 patch release (Long Term Support) is now available. AMQ Streams 1.6.7 is the latest Long Term Support release for use with RHEL 7 and 8. For additional details about the issues resolved in AMQ Streams 1.6.7, see AMQ Streams 1.6.x Resolved Issues . Log4j vulnerabilities AMQ Streams includes log4j 1.2.17. The release fixes a number of log4j vulnerabilities. For more information on the vulnerabilities addressed in this release, see the following CVE articles: CVE-2022-23307 CVE-2022-23305 CVE-2022-23302 CVE-2021-4104 CVE-2020-9488 CVE-2019-17571 CVE-2017-5645 5.2. Fixed issues for AMQ Streams 1.6.6 For additional details about the issues resolved in AMQ Streams 1.6.6, see AMQ Streams 1.6.x Resolved Issues . Log4j2 vulnerabilities AMQ Streams includes log4j2 2.17.1. The release fixes a number of log4j2 vulnerabilities. For more information on the vulnerabilities addressed in this release, see the following CVE articles: CVE-2021-45046 CVE-2021-45105 CVE-2021-44832 CVE-2021-44228 5.3. Fixed issues for AMQ Streams 1.6.5 For additional details about the issues resolved in AMQ Streams 1.6.5, see AMQ Streams 1.6.x Resolved Issues . Log4j2 vulnerability The 1.6.5 release fixes a remote code execution vulnerability for AMQ Streams components that use log4j2. The vulnerability could allow a remote code execution on the server if the system logs a string value from an unauthorized source. This affects log4j versions between 2.0 and 2.14.1. For more information, see CVE-2021-44228 . 5.4. Fixed issues for AMQ Streams 1.6.4 For additional details about the issues resolved in AMQ Streams 1.6.4, see AMQ Streams 1.6.x Resolved Issues . 5.5. Fixed issues for AMQ Streams 1.6.0 Issue Number Description ENTMQST-2049 Kafka Bridge: Kafka consumer should be tracked with group-consumerid key ENTMQST-2084 Zookeeper version on the docs doesn't match with the version in AMQ Streams 1.5
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_rhel/resolved-issues-str
Security and compliance
Security and compliance OpenShift Container Platform 4.7 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/index
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the AMQ Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component.
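For example, after downloading an archive from the Software Downloads page, you can unpack it with standard tools. The archive names below are hypothetical placeholders; substitute the file you actually downloaded:

    # hypothetical archive names -- use the file downloaded from the portal
    unzip amq-streams-1.6.7-bin.zip
    tar xzf amq-streams-1.6.7-bin.tar.gz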
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_the_amq_streams_kafka_bridge/using_your_subscription
Chapter 5. KubeletConfig [machineconfiguration.openshift.io/v1]
Chapter 5. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object KubeletConfigSpec defines the desired state of KubeletConfig status object KubeletConfigStatus defines the observed state of a KubeletConfig 5.1.1. .spec Description KubeletConfigSpec defines the desired state of KubeletConfig Type object Property Type Description autoSizingReserved boolean kubeletConfig `` kubeletConfig fields are defined in kubernetes upstream. Please refer to the types defined in the version/commit used by OpenShift of the upstream kubernetes. It's important to note that, since the fields of the kubelet configuration are directly fetched from upstream, the validation of those values is handled directly by the kubelet. Please refer to the upstream version of the relevant kubernetes for the valid values of these fields. Invalid values of the kubelet configuration fields may render cluster nodes unusable. logLevel integer machineConfigPoolSelector object MachineConfigPoolSelector selects which pools the KubeletConfig should apply to. A nil selector will result in no pools being selected. tlsSecurityProfile object If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that only Old and Intermediate profiles are currently supported, and the maximum available minTLSVersion is VersionTLS12. 5.1.2. .spec.machineConfigPoolSelector Description MachineConfigPoolSelector selects which pools the KubeletConfig should apply to. A nil selector will result in no pools being selected. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.3. .spec.machineConfigPoolSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.4. .spec.machineConfigPoolSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.5. .spec.tlsSecurityProfile Description If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that only Old and Intermediate profiles are currently supported, and the maximum available minTLSVersion is VersionTLS12. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: VersionTLS12 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: VersionTLS13 old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: VersionTLS10 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 5.1.6. 
.status Description KubeletConfigStatus defines the observed state of a KubeletConfig Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object KubeletConfigCondition defines the state of the KubeletConfig observedGeneration integer observedGeneration represents the generation observed by the controller. 5.1.7. .status.conditions Description conditions represents the latest available observations of current state. Type array 5.1.8. .status.conditions[] Description KubeletConfigCondition defines the state of the KubeletConfig Type object Property Type Description lastTransitionTime `` lastTransitionTime is the time of the last update to the current status object. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the reason for the condition's last transition. Reasons are PascalCase status string status of the condition, one of True, False, Unknown. type string type specifies the state of the operator's reconciliation functionality. 5.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs DELETE : delete collection of KubeletConfig GET : list objects of kind KubeletConfig POST : create a KubeletConfig /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name} DELETE : delete a KubeletConfig GET : read the specified KubeletConfig PATCH : partially update the specified KubeletConfig PUT : replace the specified KubeletConfig /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name}/status GET : read status of the specified KubeletConfig PATCH : partially update status of the specified KubeletConfig PUT : replace status of the specified KubeletConfig 5.2.1. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs HTTP method DELETE Description delete collection of KubeletConfig Table 5.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeletConfig Table 5.2. HTTP responses HTTP code Response body 200 - OK KubeletConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeletConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body KubeletConfig schema Table 5.5. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 202 - Accepted KubeletConfig schema 401 - Unauthorized Empty 5.2.2. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the KubeletConfig HTTP method DELETE Description delete a KubeletConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeletConfig Table 5.9. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeletConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeletConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body KubeletConfig schema Table 5.14. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 401 - Unauthorized Empty 5.2.3. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the KubeletConfig HTTP method GET Description read status of the specified KubeletConfig Table 5.16. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeletConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeletConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body KubeletConfig schema Table 5.21. HTTP responses HTTP code Response body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 401 - Unauthorized Empty
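In practice, a KubeletConfig is usually created from a YAML manifest rather than by calling the endpoints directly. The following is a minimal sketch: the pool label custom-kubelet=set-max-pods and the maxPods value are illustrative assumptions, and the manifest assumes a MachineConfigPool carrying that label already exists. Any field accepted by the upstream kubelet may appear under kubeletConfig:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: set-max-pods
    spec:
      machineConfigPoolSelector:
        matchLabels:
          custom-kubelet: set-max-pods  # selects pools carrying this label
      kubeletConfig:
        maxPods: 250  # illustrative upstream kubelet field

Creating the manifest with oc create -f <filename>.yaml issues the POST to /apis/machineconfiguration.openshift.io/v1/kubeletconfigs described above.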
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_apis/kubeletconfig-machineconfiguration-openshift-io-v1
Chapter 18. System and Subscription Management
Chapter 18. System and Subscription Management The default registration URL is now subscription.rhsm.redhat.com Since Red Hat Enterprise Linux 7.3, the default registration URL has been changed to subscription.rhsm.redhat.com. (BZ# 1396085 ) subscription-manager displays all addresses associated with a network interface Previously, the subscription-manager utility displayed only one address per network interface even if the network interface had more than one associated address. With this update, a new system fact with the suffix _list corresponding to each network interface is reported to the entitlement server that contains a comma-separated string of values. As a result, subscription-manager is now able to display all addresses associated with the network interface. (BZ# 874735 ) rct now enables displaying only subscription data The rct utility now accepts the --no-content option. Passing --no-content to the rct cat-manifest command ensures that rct displays only subscription data. (BZ# 1336883 ) rct cat-manifest now displays information to determine if virt-who is required The output of the rct cat-manifest [MANIFEST_ZIP] command now includes fields for Virt Limit and Requires Virt-who . These fields help determine if the virt-who component is required for the deployment. (BZ# 1336880 ) The needs-restarting utility has the new --services option With this update, the needs-restarting utility has the new --services option. When the new option is specified, needs-restarting lists newline-separated service names instead of process IDs. This helps the system administrator to find out which systemd services they need to restart after running yum update to benefit from the updates. (BZ#1335587) The needs-restarting utility has the new --reboothint option With this update, the needs-restarting utility has the new --reboothint option. Running needs-restarting --reboothint outputs a message saying which core packages have been updated since the last boot, if any, and thus whether a reboot is recommended. This helps the system administrator to find out whether they need to reboot the system to benefit from all updates. Note that the advice is only informational and does not mean it is strictly necessary to reboot the system immediately. (BZ# 1192946 ) New skip_missing_names_on_install and skip_missing_names_on_update options for yum The skip_missing_names_on_install and skip_missing_names_on_update options have been added to yum repository configuration. With skip_missing_names_on_install set to False in the /etc/yum.conf file, using the yum install command fails if yum cannot find one of the specified packages, groups, or RPM files. With skip_missing_names_on_update set to False , using the yum update command fails if yum cannot find one of the specified packages, groups, or RPM files, or if they are available, but not installed. (BZ# 1274211 ) New compare_providers_priority option for yum This update adds the compare_providers_priority option to yum repository configuration. When set in the /etc/yum.conf file, this option enables yum to respect repository priorities when resolving dependencies, which can be used to influence what yum does when it encounters a dependency that can be satisfied by packages from multiple different repositories. (BZ#1186690)
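As a sketch of how the new yum options fit together, the following /etc/yum.conf fragment uses illustrative values (these are not the shipped defaults):

    [main]
    skip_missing_names_on_install=False   # yum install fails if any named package, group, or RPM file is missing
    skip_missing_names_on_update=False    # yum update fails if any named item is missing or not installed
    compare_providers_priority=True       # respect repository priorities when resolving dependencies

The needs-restarting options described above are then run directly, for example:

    needs-restarting --services    # newline-separated service names instead of process IDs
    needs-restarting --reboothint  # reports whether updated core packages recommend a reboot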
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_system_and_subscription_management
Vulnerability reporting with Clair on Red Hat Quay
Vulnerability reporting with Clair on Red Hat Quay Red Hat Quay 3 Vulnerability reporting with Clair on Red Hat Quay Red Hat OpenShift Documentation Team
[ "updaters: config: rhel: ignore_unpatched: false", "auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer'", "# updaters: sets: - alpine - aws - osv #", "# updaters: sets: - alpine #", "# updaters: sets: - aws #", "# updaters: sets: - debian #", "# updaters: sets: - clair.cvss #", "# updaters: sets: - oracle #", "# updaters: sets: - photon #", "# updaters: sets: - suse #", "# updaters: sets: - ubuntu #", "# updaters: sets: - osv #", "# updaters: sets: - rhel - rhcc - clair.cvss - osv #", "# updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #", "# updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #", "# updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #", "# updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #", "# updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #", "# updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #", "# updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #", "# updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #", "# updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #", "# updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #", "# matcher: disable_updaters: true #", "--- FEATURE_FIPS = true ---", "mkdir /home/<user-name>/quay-poc/postgres-clairv4", "setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4", "sudo podman run -d --name postgresql-clairv4 -e POSTGRESQL_USER=clairuser -e POSTGRESQL_PASSWORD=clairpass -e POSTGRESQL_DATABASE=clair -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5433:5432 -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-15", "sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\"\" | psql -d clair -U postgres'", "CREATE EXTENSION", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret", "tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/", "mkdir /etc/opt/clairv4/config/", "cd /etc/opt/clairv4/config/", "http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: \"MTU5YzA4Y2ZkNzJoMQ==\" iss: [\"quay\"] tracing and metrics trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\" metrics: name: \"prometheus\"", "sudo 
podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3", "sudo podman stop <quay_container_name>", "sudo podman stop <clair_container_id>", "sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> \\ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15", "mkdir -p /host/data/clair-postgresql15-directory", "setfacl -m u:26:-wx /host/data/clair-postgresql15-directory", "sudo podman stop <clair_postgresql13_container_name>", "sudo podman run -d --rm --name <postgresql15-clairv4> -e POSTGRESQL_USER=<clair_username> -e POSTGRESQL_PASSWORD=<clair_password> -e POSTGRESQL_DATABASE=<clair_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5433:5432 -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> registry.redhat.io/rhel8/postgresql-15", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:{productminv}", "podman stop <clairv4_container_name>", "podman pull quay.io/projectquay/clair:nightly-2024-02-03", "podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z quay.io/projectquay/clair:nightly-2024-02-03", "podman pull ubuntu:20.04", "sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS 
AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl", "chmod u+x ./clairctl", "oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "./clairctl --config ./config.yaml export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "sudo podman cp clairv4:/usr/bin/clairctl ./clairctl", "chmod u+x ./clairctl", "mkdir /etc/clairv4/config/", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3", "./clairctl --config ./config.yaml export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json 
package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json", "clair -conf ./path/to/config.yaml -mode indexer", "clair -conf ./path/to/config.yaml -mode matcher", "export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>", "export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>", "export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>", "export NO_PROXY=<comma_separated_list_of_hosts_and_domains>", "http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"", "http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info", "indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true", "matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=<DB_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2", "matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"", "updaters: sets: - rhel config: rhel: ignore_unpatched: false", "notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null", "notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"", "notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"mandatory/path/to/cert\" key: \"mandatory/path/to/key\"", "notifier: stomp: destination: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"mandatory/path/to/cert\" key: \"mandatory/path/to/key\"", "auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]", "trace: name: \"jaeger\"
probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"", "metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\"" ]
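The auth.psk stanza shown above is what allows Quay and Clair to authenticate to each other with a shared base64-encoded key. As a rough illustration of what that authenticator expects, the following minimal Java sketch mints the HS256 bearer token using only the JDK; the key is copied from the example configuration, and the exact claim set (the iat and exp values) is an assumption for illustration, not a documented contract.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClairPskToken {
    public static void main(String[] args) throws Exception {
        // Must be the same value as auth.psk.key in the Clair config.
        byte[] psk = Base64.getDecoder().decode("MTU5YzA4Y2ZkNzJoMQ==");
        long now = System.currentTimeMillis() / 1000L;
        String header = b64Url("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        // The "iss" claim must match one of the issuers under auth.psk.iss, for example "quay".
        String payload = b64Url(String.format("{\"iss\":\"quay\",\"iat\":%d,\"exp\":%d}", now, now + 300)
                .getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(psk, "HmacSHA256"));
        String signature = b64Url(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
        // Send the result as: Authorization: Bearer <token>
        System.out.println(header + "." + payload + "." + signature);
    }

    private static String b64Url(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}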
https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/vulnerability_reporting_with_clair_on_red_hat_quay/OSV.dev
Chapter 42. Producer Interface
Chapter 42. Producer Interface Abstract This chapter describes how to implement the Producer interface, which is an essential step in the implementation of an Apache Camel component. 42.1. The Producer Interface Overview An instance of org.apache.camel.Producer type represents a target endpoint in a route. The role of the producer is to send requests ( In messages) to a specific physical endpoint and to receive the corresponding response ( Out or Fault message). A Producer object is essentially a special kind of Processor that appears at the end of a processor chain (equivalent to a route). Figure 42.1, "Producer Inheritance Hierarchy" shows the inheritance hierarchy for producers. Figure 42.1. Producer Inheritance Hierarchy The Producer interface Example 42.1, "Producer Interface" shows the definition of the org.apache.camel.Producer interface. Example 42.1. Producer Interface Producer methods The Producer interface defines the following methods: process() (inherited from Processor) - The most important method. A producer is essentially a special type of processor that sends a request to an endpoint, instead of forwarding the exchange object to another processor. By overriding the process() method, you define how the producer sends and receives messages to and from the relevant endpoint. getEndpoint() - Returns a reference to the parent endpoint instance. createExchange() - These overloaded methods are analogous to the corresponding methods defined in the Endpoint interface. Normally, these methods delegate to the corresponding methods defined on the parent Endpoint instance (this is what the DefaultEndpoint class does by default). Occasionally, you might need to override these methods. Asynchronous processing Processing an exchange object in a producer - which usually involves sending a message to a remote destination and waiting for a reply - can potentially block for a significant length of time. If you want to avoid blocking the current thread, you can opt to implement the producer as an asynchronous processor . The asynchronous processing pattern decouples the preceding processor from the producer, so that the process() method returns without delay. See Section 38.1.4, "Asynchronous Processing" . When implementing a producer, you can support the asynchronous processing model by implementing the org.apache.camel.AsyncProcessor interface. On its own, this is not enough to ensure that the asynchronous processing model will be used: it is also necessary for the preceding processor in the chain to call the asynchronous version of the process() method. The definition of the AsyncProcessor interface is shown in Example 42.2, "AsyncProcessor Interface" . Example 42.2. AsyncProcessor Interface The asynchronous version of the process() method takes an extra argument, callback , of org.apache.camel.AsyncCallback type. The corresponding AsyncCallback interface is defined as shown in Example 42.3, "AsyncCallback Interface" . Example 42.3. AsyncCallback Interface The caller of AsyncProcessor.process() must provide an implementation of AsyncCallback to receive the notification that processing has finished. The AsyncCallback.done() method takes a boolean argument that indicates whether the processing was performed synchronously or not. Normally, the flag would be false , to indicate asynchronous processing. In some cases, however, it can make sense for the producer not to process asynchronously (in spite of being asked to do so). 
For example, if the producer knows that the processing of the exchange will complete rapidly, it could optimise the processing by doing it synchronously. In this case, the doneSynchronously flag should be set to true . ExchangeHelper class When implementing a producer, you might find it helpful to call some of the methods in the org.apache.camel.util.ExchangeHelper utility class. For full details of the ExchangeHelper class, see Section 35.4, "The ExchangeHelper Class" . 42.2. Implementing the Producer Interface Alternative ways of implementing a producer You can implement a producer in one of the following ways: How to implement a synchronous producer How to implement an asynchronous producer How to implement a synchronous producer Example 42.4, "DefaultProducer Implementation" outlines how to implement a synchronous producer. In this case, a call to Producer.process() blocks until a reply is received. Example 42.4. DefaultProducer Implementation 1 Implement a custom synchronous producer class, CustomProducer , by extending the org.apache.camel.impl.DefaultProducer class. 2 Implement a constructor that takes a reference to the parent endpoint. 3 The process() method implementation represents the core of the producer code. The implementation of the process() method is entirely dependent on the type of component that you are implementing. In outline, the process() method is normally implemented as follows: If the exchange contains an In message, and if this is consistent with the specified exchange pattern, then send the In message to the designated endpoint. If the exchange pattern anticipates the receipt of an Out message, then wait until the Out message has been received. This typically causes the process() method to block for a significant length of time. When a reply is received, call exchange.setOut() to attach the reply to the exchange object. If the reply contains a fault message, set the fault flag on the Out message using Message.setFault(true) . How to implement an asynchronous producer Example 42.5, "CustomProducer Implementation" outlines how to implement an asynchronous producer. In this case, you must implement both a synchronous process() method and an asynchronous process() method (which takes an additional AsyncCallback argument). Example 42.5. CustomProducer Implementation import org.apache.camel.AsyncCallback; import org.apache.camel.AsyncProcessor; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultProducer; public class CustomProducer extends DefaultProducer implements AsyncProcessor { 1 public CustomProducer(Endpoint endpoint) { 2 super(endpoint); // ... } public void process(Exchange exchange) throws Exception { 3 // Process exchange synchronously. // ... } public boolean process(Exchange exchange, AsyncCallback callback) { 4 // Process exchange asynchronously. CustomProducerTask task = new CustomProducerTask(exchange, callback); // Process 'task' in a separate thread... // ... return false; 5 } } public class CustomProducerTask implements Runnable { 6 private Exchange exchange; private AsyncCallback callback; public CustomProducerTask(Exchange exchange, AsyncCallback callback) { this.exchange = exchange; this.callback = callback; } public void run() { 7 // Process exchange. // ... 
callback.done(false); } } 1 Implement a custom asynchronous producer class, CustomProducer , by extending the org.apache.camel.impl.DefaultProducer class, and implementing the AsyncProcessor interface. 2 Implement a constructor that takes a reference to the parent endpoint. 3 Implement the synchronous process() method. 4 Implement the asynchronous process() method. You can implement the asynchronous method in several ways. The approach shown here is to create a java.lang.Runnable instance, task , that represents the code that runs in a sub-thread. You then use the Java threading API to run the task in a sub-thread (for example, by creating a new thread or by allocating the task to an existing thread pool). 5 Normally, you return false from the asynchronous process() method, to indicate that the exchange was processed asynchronously. 6 The CustomProducerTask class encapsulates the processing code that runs in a sub-thread. This class must store a copy of the Exchange object, exchange , and the AsyncCallback object, callback , as private member variables. 7 The run() method contains the code that sends the In message to the producer endpoint and waits to receive the reply, if any. After receiving the reply ( Out message or Fault message) and inserting it into the exchange object, you must call callback.done() to notify the caller that processing is complete.
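To make the contract between a producer and the preceding processor concrete, the following minimal sketch shows the calling side of the asynchronous model. CustomProducer refers to the example class above; the CountDownLatch-based waiting strategy is an illustrative assumption and is not part of the Camel API.

import java.util.concurrent.CountDownLatch;
import org.apache.camel.AsyncCallback;
import org.apache.camel.AsyncProcessor;
import org.apache.camel.Exchange;

public class AsyncInvoker {
    public static void invoke(AsyncProcessor producer, Exchange exchange) throws Exception {
        final CountDownLatch latch = new CountDownLatch(1);
        boolean doneSync = producer.process(exchange, new AsyncCallback() {
            public void done(boolean doneSynchronously) {
                // Invoked exactly once, whether the work ran in-line or in a sub-thread.
                latch.countDown();
            }
        });
        if (!doneSync) {
            // Processing continues in another thread; block here until the callback fires.
            latch.await();
        }
        // The exchange now carries the Out (or Fault) message, if any.
    }
}

When process() returns true, the exchange was completed in-line and the caller can read the reply immediately; when it returns false, the caller must not touch the exchange until the callback has fired.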
[ "package org.apache.camel; public interface Producer extends Processor, Service, IsSingleton { Endpoint<E> getEndpoint(); Exchange createExchange(); Exchange createExchange(ExchangePattern pattern); Exchange createExchange(E exchange); }", "package org.apache.camel; public interface AsyncProcessor extends Processor { boolean process(Exchange exchange, AsyncCallback callback); }", "package org.apache.camel; public interface AsyncCallback { void done(boolean doneSynchronously); }", "import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultProducer; public class CustomProducer extends DefaultProducer { 1 public CustomProducer (Endpoint endpoint) { 2 super(endpoint); // Perform other initialization tasks } public void process(Exchange exchange) throws Exception { 3 // Process exchange synchronously. // } }", "import org.apache.camel.AsyncCallback; import org.apache.camel.AsyncProcessor; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultProducer; public class _CustomProducer_ extends DefaultProducer implements AsyncProcessor { 1 public _CustomProducer_(Endpoint endpoint) { 2 super(endpoint); // } public void process(Exchange exchange) throws Exception { 3 // Process exchange synchronously. // } public boolean process(Exchange exchange, AsyncCallback callback) { 4 // Process exchange asynchronously. CustomProducerTask task = new CustomProducerTask(exchange, callback); // Process 'task' in a separate thread // return false; 5 } } public class CustomProducerTask implements Runnable { 6 private Exchange exchange; private AsyncCallback callback; public CustomProducerTask(Exchange exchange, AsyncCallback callback) { this.exchange = exchange; this.callback = callback; } public void run() { 7 // Process exchange. // callback.done(false); } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/producerintf
Chapter 3. Installing a cluster on Nutanix in a restricted network
Chapter 3. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.14, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 3.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 3.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.3. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. 
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 3.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Update one or more of the default configuration parameters in the install-config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Nutanix". 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 3.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 3.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . 
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 3.9. 
Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10. Post installation Complete the following steps to complete the configuration of your cluster. 3.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 3.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 3.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 3.12. Additional resources About remote health monitoring 3.13. Next steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster
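Because the installation fails early when Prism Central or Prism Element cannot be reached on port 9440, it can save time to verify reachability before running the installation program. The following is a minimal Java sketch of such a check; the host names are the placeholders used in the sample install-config.yaml and are assumptions for illustration. Note that this only tests TCP reachability, not certificate validity.

import java.net.InetSocketAddress;
import java.net.Socket;

public class PrismPortCheck {
    public static void main(String[] args) {
        String[] hosts = { "your.prismcentral.domainname", "your.prismelement.domainname" };
        for (String host : hosts) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, 9440), 5000); // 5 second timeout
                System.out.println(host + ":9440 is reachable");
            } catch (Exception e) {
                System.out.println(host + ":9440 is NOT reachable: " + e.getMessage());
            }
        }
    }
}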
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install coreos print-stream-json", "\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"", "platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned
13.2. XML representation of a Storage Connection Resource
13.2. XML representation of a Storage Connection Resource Example 13.1. An XML representation of a storage connection resource
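The representation above is what the API returns for a GET request on the storage connections collection. The following is a minimal Java sketch of issuing that request with the JDK HTTP client (Java 11 or later); the host name and credentials are illustrative assumptions, and TLS certificate handling is omitted.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ListStorageConnections {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; use a real Red Hat Virtualization user.
        String auth = Base64.getEncoder().encodeToString("admin@internal:password".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://rhevm.example.com/ovirt-engine/api/storageconnections"))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/xml")
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // XML like the representation above
    }
}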
[ "<storage_connections> <storage_connection href= \"/ovirt-engine/api/storageconnections/608c5b96-9939-4331-96b5-197f28aa2e35\" id=\"608c5b96-9939-4331-96b5-197f28aa2e35\"> <address>domain.example.com</address> <type>nfs</type> <path>/var/lib/exports/iso</path> </storage_connection> <storage_connection href= \"/ovirt-engine/api/storageconnections/2ebb3f78-8c22-4666-8df4-e4bb7fec6b3a\" id=\"2ebb3f78-8c22-4666-8df4-e4bb7fec6b3a\"> <address>domain.example.com</address> <type>posixfs</type> <path>/export/storagedata/username/data</path> <vfs_type>nfs</vfs_type> </storage_connection> </storage_connections>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_storage_connection_resource
Chapter 2. Red Hat Decision Manager components
Chapter 2. Red Hat Decision Manager components The product is made up of Business Central and KIE Server. Business Central is the graphical user interface where you create and manage business rules. You can install Business Central in a Red Hat JBoss EAP instance or on the Red Hat OpenShift Container Platform (OpenShift). Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. You can install KIE Server in a Red Hat JBoss EAP instance, in a Red Hat JBoss EAP cluster, on OpenShift, in an Oracle WebLogic server instance, in an IBM WebSphere Application Server instance, or as a part of Spring Boot application. You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). A KIE container is a specific version of a project. If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers. The Process Automation Manager controller is integrated with Business Central. If you install Business Central on Red Hat JBoss EAP, use the Execution Server page to create and maintain KIE containers. However, if you do not install Business Central, you can install the headless Process Automation Manager controller and use the REST API or the KIE Server Java Client API to interact with it. Red Hat build of OptaPlanner is integrated in Business Central and KIE Server. It is a lightweight, embeddable planning engine that optimizes planning problems. Red Hat build of OptaPlanner helps Java programmers solve planning problems efficiently, and it combines optimization heuristics and metaheuristics with efficient score calculations.
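For an unmanaged KIE Server, the workflow described above, creating and maintaining KIE containers yourself, can be scripted with the KIE Server Java Client API. The following is a minimal sketch, assuming the kie-server-client library is on the classpath; the server URL, credentials, and Maven coordinates of the project are placeholders.

import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.ReleaseId;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class CreateKieContainer {
    public static void main(String[] args) {
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "kieserver", "password");
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
        // A KIE container is a specific version of a project, identified by its Maven coordinates.
        ReleaseId releaseId = new ReleaseId("com.example", "decisions", "1.0.0");
        ServiceResponse<KieContainerResource> response = client.createContainer(
                "decisions_1.0.0", new KieContainerResource("decisions_1.0.0", releaseId));
        System.out.println(response.getMsg()); // Reports success or failure of the deployment
    }
}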
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/components-con_planning
Installing on IBM Power
Installing on IBM Power OpenShift Container Platform 4.15 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 
worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m 
operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 
16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp 
vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now 
safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch 
configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", 
"controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_ibm_power/index
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672. It must have anonymous access enabled. For more information, see Starting the broker. You must have a queue named examples. For more information, see Creating a queue; a brief command-line sketch follows at the end of this section. 3.2. Running Hello World The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.rb example. $ cd /usr/share/proton/examples/ruby/ $ ruby helloworld.rb amqp://127.0.0.1 examples Hello World!
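If the examples queue does not exist yet, it can typically be created from the broker instance directory with the Artemis CLI. This is a sketch under the assumption that the broker is AMQ Broker (Apache ActiveMQ Artemis); the instance path is a placeholder, and the exact flags can vary between broker versions:

$ <broker-instance-dir>/bin/artemis queue create --name examples --address examples --auto-create-address --anycast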
[ "cd /usr/share/proton/examples/ruby/ ruby helloworld.rb amqp://127.0.0.1 examples Hello World!" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/getting_started
Chapter 2. MTC release notes
Chapter 2. MTC release notes 2.1. Migration Toolkit for Containers 1.8 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.1.1. Migration Toolkit for Containers 1.8.5 release notes 2.1.1.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.5 has the following technical changes: Federal Information Processing Standard (FIPS) FIPS is a set of computer security standards developed by the United States federal government in accordance with the Federal Information Security Management Act (FISMA). Starting with version 1.8.5, MTC is designed for FIPS compliance. 2.1.1.2. Resolved issues For more information, see the list of MTC 1.8.5 resolved issues in Jira. 2.1.1.3. Known issues MTC 1.8.5 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) MTC does not patch statefulset.spec.volumeClaimTemplates[].spec.storageClassName on storage class conversion While running a Storage Class conversion for a StatefulSet application, MTC updates the persistent volume claims (PVC) references in .spec.volumeClaimTemplates[].metadata.name to use the migrated PVC names. MTC does not update spec.volumeClaimTemplates[].spec.storageClassName , which causes the application to scale up. Additionally, new replicas consume PVCs created under the old Storage Class instead of the migrated Storage Class. (MIG-1660) Performing a StorageClass conversion triggers the scale-down of all applications in the namespace When running a StorageClass conversion on more than one application, MTC scales down all the applications in the cutover phase, including those not involved in the migration. (MIG-1661) MigPlan cannot be edited to have the same target namespace as the source cluster after it is changed After changing the target namespace to something different from the source namespace while creating a MigPlan in the MTC UI, you cannot edit the MigPlan again to make the target namespace the same as the source namespace. (MIG-1600) Migrated builder pod fails to push to the image registry When migrating an application that includes BuildConfig from the source to the target cluster, the builder pod encounters an error, failing to push the image to the image registry. (BZ#2234781) Conflict condition clears briefly after it is displayed When creating a new state migration plan that results in a conflict error, the error is cleared shortly after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired warning not displayed after setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning does not appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) For a complete list of all known issues, see the list of MTC 1.8.5 known issues in Jira. 2.1.2. 
Migration Toolkit for Containers 1.8.4 release notes 2.1.2.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.4 has the following technical changes: MTC 1.8.4 extends its dependency resolution to include support for using OpenShift API for Data Protection (OADP) 1.4. Support for KubeVirt Virtual Machines with DirectVolumeMigration MTC 1.8.4 adds support for KubeVirt Virtual Machines (VMs) with Direct Volume Migration (DVM). 2.1.2.2. Resolved issues MTC 1.8.4 has the following major resolved issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack, returned during the task. Earlier versions of MTC are impacted, while MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) UI stuck at Namespaces while creating a migration plan When trying to create a migration plan from the MTC UI, the migration plan wizard becomes stuck at the Namespaces step. This issue has been resolved in MTC 1.8.4. (MIG-1597) Migration fails with error of no matches for kind Virtual machine in version kubevirt/v1 During the migration of an application, all the necessary steps, including the backup, DVM, and restore, are successfully completed. However, the migration is marked as unsuccessful with the error message no matches for kind Virtual machine in version kubevirt/v1. (MIG-1594) Direct Volume Migration fails when migrating to a namespace different from the source namespace On performing a migration from the source cluster to the target cluster, with the target namespace different from the source namespace, the DVM fails. (MIG-1592) Direct Image Migration does not respect label selector on migplan When using Direct Image Migration (DIM), if a label selector is set on the migration plan, DIM does not respect it and attempts to migrate all imagestreams in the namespace. (MIG-1533) 2.1.2.3. Known issues MTC 1.8.4 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) Rsync pod fails to start causing the DVM phase to fail The DVM phase fails due to the Rsync pod failing to start, because of a permission issue. (BZ#2231403) Migrated builder pod fails to push to image registry When migrating an application that includes BuildConfig from the source to the target cluster, the builder pod encounters an error, failing to push the image to the image registry. (BZ#2234781) Conflict condition gets cleared briefly after it is created When creating a new state migration plan that results in a conflict error, that error is cleared shortly after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired warning not displayed after setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning fails to appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) 2.1.3. Migration Toolkit for Containers 1.8.3 release notes 2.1.3.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.3 has the following technical changes: OADP 1.3 is now supported MTC 1.8.3 adds support for OpenShift API for Data Protection (OADP) 1.3 as a dependency of MTC 1.8.z. 2.1.3.2.
Resolved issues MTC 1.8.3 has the following major resolved issues: CVE-2024-24786: Flaw in Golang protobuf module causes unmarshal function to enter infinite loop In earlier releases of MTC, a vulnerability was found in Golang's protobuf module, where the unmarshal function entered an infinite loop while processing certain invalid inputs. Consequently, an attacker could provide carefully constructed invalid inputs that caused the function to enter an infinite loop. With this update, the unmarshal function works as expected. For more information, see CVE-2024-24786. CVE-2023-45857: Axios Cross-Site Request Forgery Vulnerability In earlier releases of MTC, a vulnerability was discovered in Axios 1.5.1 that inadvertently revealed a confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to the host, allowing attackers to view sensitive information. For more information, see CVE-2023-45857. Restic backup does not work properly when the source workload is not quiesced In earlier releases of MTC, some files did not migrate when deploying an application with a route. The Restic backup did not function as expected when the quiesce option was unchecked for the source workload. This issue has been resolved in MTC 1.8.3. For more information, see BZ#2242064. The Migration Controller fails to install due to an unsupported value error in Velero The MigrationController failed to install due to an unsupported value error in Velero. Updating OADP 1.3.0 to OADP 1.3.1 resolves this problem. For more information, see BZ#2267018. This issue has been resolved in MTC 1.8.3. For a complete list of all resolved issues, see the list of MTC 1.8.3 resolved issues in Jira. 2.1.3.3. Known issues Migration Toolkit for Containers (MTC) 1.8.3 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack, returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform version 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) For a complete list of all known issues, see the list of MTC 1.8.3 known issues in Jira. 2.1.4. Migration Toolkit for Containers 1.8.2 release notes 2.1.4.1. Resolved issues This release has the following major resolved issues: Backup phase fails after setting custom CA replication repository In earlier releases of Migration Toolkit for Containers (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurred during the backup phase. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution In earlier releases of MTC, versions before 4.1.3 of the tough-cookie package used in MTC were vulnerable to prototype pollution. This vulnerability occurred because CookieJar did not handle cookies properly when the value of rejectPublicSuffixes was set to false.
For more details, see (CVE-2023-26136). CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In earlier releases of MTC, versions of the semver package before 7.5.2, used in MTC, were vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange, when untrusted user data was provided as a range. For more details, see (CVE-2022-25883). 2.1.4.2. Known issues MTC 1.8.2 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack, returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.5. Migration Toolkit for Containers 1.8.1 release notes 2.1.5.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.1 has the following major resolved issues: CVE-2023-39325: golang: net/http, x/net/http2: rapid stream resets can cause excessive work A flaw was found in handling multiplexed streams in the HTTP/2 protocol, which is used by MTC. A client could repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates additional workload for the server in terms of setting up and dismantling streams, while avoiding any server-side limitations on the maximum number of active streams per connection, resulting in a denial of service due to server resource consumption. (BZ#2245079) It is advised to update to MTC 1.8.1 or later, which resolves this issue. For more details, see (CVE-2023-39325) and (CVE-2023-44487). 2.1.5.2. Known issues Migration Toolkit for Containers (MTC) 1.8.1 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes. An exception, ValueError: too many values to unpack, is returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.6. Migration Toolkit for Containers 1.8.0 release notes 2.1.6.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.0 has the following resolved issues: Indirect migration is stuck on backup stage In earlier releases, an indirect migration became stuck at the backup stage due to an InvalidImageName error. (BZ#2233097) PodVolumeRestore remains In Progress, keeping the migration stuck at Stage Restore In earlier releases, on performing an indirect migration, the migration became stuck at the Stage Restore step, waiting for the podvolumerestore to be completed. (BZ#2233868) Migrated application unable to pull image from internal registry on target cluster In earlier releases, on migrating an application to the target cluster, the migrated application failed to pull the image from the internal image registry, resulting in an application failure. (BZ#2233103) Migration failing on Azure due to authorization issue In earlier releases, on an Azure cluster, when backing up to Azure storage, the migration failed at the Backup stage. (BZ#2238974) 2.1.6.2.
Known issues MTC 1.8.0 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack, returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) Old Restic pods are not getting removed on upgrading MTC 1.7.x to 1.8.x In this release, on upgrading the MTC Operator from 1.7.x to 1.8.x, the old Restic pods are not being removed. Therefore, after the upgrade, both Restic and node-agent pods are visible in the namespace. (BZ#2236829) Migrated builder pod fails to push to image registry In this release, on migrating an application including a BuildConfig from a source to a target cluster, the builder pod encounters an error, failing to push the image to the image registry. (BZ#2234781) [UI] CA bundle file field is not properly cleared In this release, after enabling Require SSL verification and adding content to the CA bundle file for an MCG NooBaa bucket in MigStorage, the connection fails as expected. However, when reverting these changes by removing the CA bundle content and clearing Require SSL verification, the connection still fails. The issue is only resolved by deleting and re-adding the repository. (BZ#2240052) Backup phase fails after setting custom CA replication repository In earlier releases of MTC, after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurs during the backup phase. This issue is resolved in MTC 1.8.2. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution Versions before 4.1.3 of the tough-cookie package, used in MTC, are vulnerable to prototype pollution. This vulnerability occurs because CookieJar does not handle cookies properly when the value of rejectPublicSuffixes is set to false. This issue is resolved in MTC 1.8.2. For more details, see (CVE-2023-26136). CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In earlier releases of MTC, versions of the semver package before 7.5.2, used in MTC, are vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange, when untrusted user data is provided as a range. This issue is resolved in MTC 1.8.2. For more details, see (CVE-2022-25883). 2.1.6.3. Technical changes This release has the following technical changes: Migration from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy Migration Toolkit for Containers Operator and Migration Toolkit for Containers 1.7.x. Migration from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. Migration Toolkit for Containers (MTC) 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x might be used. However, it must be the same MTC 1.Y.z on both source and destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported.
Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. MTC 1.8.x by default installs OADP 1.2.x. Upgrading from MTC 1.7.x to MTC 1.8.0 requires manually changing the OADP channel to 1.2. If this is not done, the upgrade of the Operator fails. 2.2. Migration Toolkit for Containers 1.7 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.18 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions, part of the Red Hat OpenShift Container Platform Life Cycle Policy. 2.2.1. Migration Toolkit for Containers 1.7.17 release notes Migration Toolkit for Containers (MTC) 1.7.17 is a Container Grade Only (CGO) release, released to refresh the health grades of the containers, with no changes to any code in the product itself compared to that of MTC 1.7.16. 2.2.2. Migration Toolkit for Containers 1.7.16 release notes 2.2.2.1. Resolved issues This release has the following resolved issues: CVE-2023-45290: Golang: net/http: Memory exhaustion in the Request.ParseMultipartForm method A flaw was found in the net/http Golang standard library package, which impacts earlier versions of MTC. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with the Request.FormValue, Request.PostFormValue, or Request.FormFile methods, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2023-45290. CVE-2024-24783: Golang: crypto/x509: Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts earlier versions of MTC. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24783. CVE-2024-24784: Golang: net/mail: Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts earlier versions of MTC. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24784.
CVE-2024-24785: Golang: html/template : Errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts earlier versions of MTC. If errors returned from MarshalJSON methods contain user-controlled data, they could be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into templates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24785 . CVE-2024-29180: webpack-dev-middleware : Lack of URL validation may lead to file leak A flaw was found in the webpack-dev-middleware package , which impacts earlier versions of MTC. This flaw fails to validate the supplied URL address sufficiently before returning local files, which could allow an attacker to craft URLs to return arbitrary local files from the developer's machine. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-29180 . CVE-2024-30255: envoy : HTTP/2 CPU exhaustion due to CONTINUATION frame flood A flaw was found in how the envoy proxy implements the HTTP/2 codec, which impacts earlier versions of MTC. There are insufficient limitations placed on the number of CONTINUATION frames that can be sent within a single stream, even after exceeding the header map limits of envoy . This flaw could allow an unauthenticated remote attacker to send packets to vulnerable servers. These packets could consume compute resources and cause a denial of service (DoS). To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-30255 . 2.2.2.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with a Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, but the Direct Volume Migration (DVM) fails with the rsync pod on the source namespace moving into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that returns a conflict error message, the error message is cleared very shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations of different provider types configured in a cluster When there are multiple Volume Snapshot Locations (VSLs) in a cluster with different provider types, but you have not set any of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.3. Migration Toolkit for Containers 1.7.15 release notes 2.2.3.1. Resolved issues This release has the following resolved issues: CVE-2024-24786: A flaw was found in Golang's protobuf module, where the unmarshal function can enter an infinite loop A flaw was found in the protojson.Unmarshal function that could cause the function to enter an infinite loop when unmarshaling certain forms of invalid JSON messages. This condition could occur when unmarshaling into a message that contained a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option was set in a JSON-formatted message. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-24786) . CVE-2024-28180: jose-go improper handling of highly compressed data A vulnerability was found in Jose due to improper handling of highly compressed data. 
An attacker could send a JSON Web Encryption (JWE) encrypted message that contained compressed data that used large amounts of memory and CPU when decompressed by the Decrypt or DecryptMulti functions. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-28180) . 2.2.3.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with a Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, and Direct Volume Migration (DVM) fails with the rsync pod on the source namespace going into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that results in a conflict error message, the error message is cleared shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations (VSLs) of different provider types configured in a cluster with no specified default VSL When there are multiple VSLs in a cluster with different provider types, and you set none of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.4. Migration Toolkit for Containers 1.7.14 release notes 2.2.4.1. Resolved issues This release has the following resolved issues: CVE-2023-39325 CVE-2023-44487: various flaws A flaw was found in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by Migration Toolkit for Containers (MTC). A client could repeatedly make a request for a new multiplex stream and then immediately send an RST_STREAM frame to cancel those requests. This activity created additional workloads for the server in terms of setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. (BZ#2243564) (BZ#2244013) (BZ#2244014) (BZ#2244015) (BZ#2244016) (BZ#2244017) To resolve this issue, upgrade to MTC 1.7.14. For more details, see (CVE-2023-44487) and (CVE-2023-39325) . CVE-2023-39318 CVE-2023-39319 CVE-2023-39321: various flaws (CVE-2023-39318) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not properly handle HTML-like "<!--" and "-->" comment tokens, or the hashbang "#!" comment tokens, in <script> contexts. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39319) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not apply the proper rules for handling occurrences of "<script" , "<!--" , and "</script" within JavaScript literals in <script> contexts. This could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39321) : A flaw was discovered in Golang, utilized by MTC. Processing an incomplete post-handshake message for a QUIC connection could cause a panic. (BZ#2238062) (BZ#2238088) (CVE-2023-39322) : A flaw was discovered in Golang, utilized by MTC. Connections using the QUIC transport protocol did not set an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. 
(BZ#2238088) To resolve these issues, upgrade to MTC 1.7.14. For more details, see (CVE-2023-39318) , (CVE-2023-39319) , and (CVE-2023-39321) . 2.2.4.2. Known issues There are no major known issues in this release. 2.2.5. Migration Toolkit for Containers 1.7.13 release notes 2.2.5.1. Resolved issues There are no major resolved issues in this release. 2.2.5.2. Known issues There are no major known issues in this release. 2.2.6. Migration Toolkit for Containers 1.7.12 release notes 2.2.6.1. Resolved issues There are no major resolved issues in this release. 2.2.6.2. Known issues This release has the following known issues: Error code 504 is displayed on the Migration details page On the Migration details page, at first, the migration details are displayed without any issues. However, after some time, the details disappear, and a 504 error is returned. ( BZ#2231106 ) Old restic pods are not removed when upgrading Migration Toolkit for Containers 1.7.x to Migration Toolkit for Containers 1.8 On upgrading the Migration Toolkit for Containers (MTC) operator from 1.7.x to 1.8.x, the old restic pods are not removed. After the upgrade, both restic and node-agent pods are visible in the namespace. ( BZ#2236829 ) 2.2.7. Migration Toolkit for Containers 1.7.11 release notes 2.2.7.1. Resolved issues There are no major resolved issues in this release. 2.2.7.2. Known issues There are no known issues in this release. 2.2.8. Migration Toolkit for Containers 1.7.10 release notes 2.2.8.1. Resolved issues This release has the following major resolved issue: Adjust rsync options in DVM In this release, you can prevent absolute symlinks from being manipulated by Rsync in the course of direct volume migration (DVM). Running DVM in privileged mode preserves absolute symlinks inside the persistent volume claims (PVCs). To switch to privileged mode, in the MigrationController CR, set the migration_rsync_privileged spec to true . ( BZ#2204461 ) 2.2.8.2. Known issues There are no known issues in this release. 2.2.9. Migration Toolkit for Containers 1.7.9 release notes 2.2.9.1. Resolved issues There are no major resolved issues in this release. 2.2.9.2. Known issues This release has the following known issue: Adjust rsync options in DVM In this release, users are unable to prevent absolute symlinks from being manipulated by rsync during direct volume migration (DVM). ( BZ#2204461 ) 2.2.10. Migration Toolkit for Containers 1.7.8 release notes 2.2.10.1. Resolved issues This release has the following major resolved issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In previous releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) Adding a MigCluster from the UI fails when the domain name has more than six characters In previous releases, adding a MigCluster from the UI failed when the domain name had more than six characters. The UI code expected a domain name of between two and six characters. ( BZ#2152149 ) UI fails to render the Migrations' page: Cannot read properties of undefined (reading 'name') In previous releases, the UI failed to render the Migrations' page, returning Cannot read properties of undefined (reading 'name') . ( BZ#2163485 ) Creating DPA resource fails on Red Hat OpenShift Container Platform 4.6 clusters In previous releases, when deploying MTC on an OpenShift Container Platform 4.6 cluster, the DPA failed to be created according to the logs, which resulted in some pods missing. 
The logs in the migration-controller in the OpenShift Container Platform 4.6 cluster indicated that an unexpected null value was passed, which caused the error. ( BZ#2173742 ) 2.2.10.2. Known issues There are no known issues in this release. 2.2.11. Migration Toolkit for Containers 1.7.7 release notes 2.2.11.1. Resolved issues There are no major resolved issues in this release. 2.2.11.2. Known issues There are no known issues in this release. 2.2.12. Migration Toolkit for Containers 1.7.6 release notes 2.2.12.1. New features Implement proposed changes for DVM support with PSA in Red Hat OpenShift Container Platform 4.12 With the incoming enforcement of Pod Security Admission (PSA) in OpenShift Container Platform 4.12, the default pod would run with a restricted profile. This restricted profile would mean that workloads to be migrated would be in violation of this policy and would no longer work. The following enhancement outlines the changes that would be required to remain compatible with OCP 4.12. ( MIG-1240 ) 2.2.12.2. Resolved issues This release has the following major resolved issues: Unable to create Storage Class Conversion plan due to missing cronjob error in Red Hat OpenShift Platform 4.12 In previous releases, on the persistent volumes page, an error was thrown that a CronJob is not available in version batch/v1beta1 , and when clicking on cancel, the migplan was created with status Not ready . ( BZ#2143628 ) 2.2.12.3. Known issues This release has the following known issue: Conflict conditions are cleared briefly after they are created When creating a new state migration plan that will result in a conflict error, that error is cleared shortly after it is displayed. ( BZ#2144299 ) 2.2.13. Migration Toolkit for Containers 1.7.5 release notes 2.2.13.1. Resolved issues This release has the following major resolved issue: Direct Volume Migration is failing as the rsync pod on the source cluster moves into an Error state In previous releases, migration succeeded with warnings, but Direct Volume Migration failed with the rsync pod on the source namespace going into an error state. ( BZ#2132978 ) 2.2.13.2. Known issues This release has the following known issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In previous releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) When editing a MigHook in the UI, the page might fail to reload The UI might fail to reload when editing a hook if there is a network connection issue. After the network connection is restored, the page will fail to reload until the cache is cleared. ( BZ#2140208 ) 2.2.14. Migration Toolkit for Containers 1.7.4 release notes 2.2.14.1. Resolved issues There are no major resolved issues in this release. 2.2.14.2. Known issues Rollback misses deleting some resources from the target cluster On performing a rollback of an application from the Migration Toolkit for Containers (MTC) UI, some resources are not deleted from the target cluster, and the rollback shows a status of successfully completed. ( BZ#2126880 ) 2.2.15. Migration Toolkit for Containers 1.7.3 release notes 2.2.15.1. Resolved issues This release has the following major resolved issues: Correct DNS validation for destination namespace In previous releases, the MigPlan could not be validated if the destination namespace started with a non-alphabetic character. 
( BZ#2102231 ) Deselecting all PVCs from UI still results in an attempted PVC transfer In previous releases, while doing a full migration, deselecting the persistent volume claims (PVCs) did not skip them, and the migration still tried to transfer them. ( BZ#2106073 ) 2.2.15.2. Known issues There are no known issues in this release. 2.2.16. Migration Toolkit for Containers 1.7.2 release notes 2.2.16.1. Resolved issues This release has the following major resolved issues: MTC UI does not display logs correctly In previous releases, the Migration Toolkit for Containers (MTC) UI did not display logs correctly. ( BZ#2062266 ) StorageClass conversion plan adding migstorage reference in migplan In previous releases, StorageClass conversion plans had a migstorage reference even though it was not being used. ( BZ#2078459 ) Velero pod log missing from downloaded logs In previous releases, when downloading a compressed (.zip) folder for all logs, the velero pod log was missing. ( BZ#2076599 ) Velero pod log missing from UI drop down In previous releases, after a migration was performed, the velero pod log was not included in the logs provided in the dropdown list. ( BZ#2076593 ) Rsync options logs not visible in log-reader pod In previous releases, when trying to set any valid or invalid rsync options in the migrationcontroller , the log-reader was not showing any logs regarding the invalid options or about the rsync command being used. ( BZ#2079252 ) Default CPU requests on Velero/Restic are too demanding and fail in certain environments In previous releases, the default CPU requests on Velero/Restic were too demanding and failed in certain environments. Default CPU requests for Velero and Restic Pods were set to 500m. These values were high. ( BZ#2088022 ) 2.2.16.2. Known issues This release has the following known issues: Updating the replication repository to a different storage provider type is not respected by the UI After updating the replication repository to a different type and clicking Update Repository , it shows connection successful, but the UI is not updated with the correct details. When clicking on the Edit button again, it still shows the old replication repository information. Furthermore, when trying to update the replication repository again, it still shows the old replication details. When selecting the new repository, it also shows all the information you entered previously and the Update repository button is not enabled, as if there are no changes to be submitted. ( BZ#2102020 ) Migration fails because the backup is not found Migration fails at the restore stage because the initial backup has not been found. ( BZ#2104874 ) Update Cluster button is not enabled when updating Azure resource group When updating the remote cluster, selecting the Azure resource group checkbox, and adding a resource group does not enable the Update cluster option. ( BZ#2098594 ) Error pop-up in UI on deleting migstorage resource When creating a backupStorage credential secret in OpenShift Container Platform, if the migstorage is removed from the UI, a 404 error is returned and the underlying secret is not removed. ( BZ#2100828 ) Miganalytic resource displaying resource count as 0 in UI After creating a migplan from the backend, the Miganalytic resource displays the resource count as 0 in the UI. 
( BZ#2102139 ) Registry validation fails when two trailing slashes are added to the Exposed route host to image registry After adding two trailing slashes, meaning // , to the exposed registry route, the MigCluster resource shows the status as connected . When creating a migplan from the backend with DIM, the plans move to the unready status. ( BZ#2104864 ) Service Account Token not visible while editing source cluster When editing a source cluster that has been added and is in the Connected state, the service account token is not visible in the UI field. To save the wizard, you have to fetch the token again and provide it in the field. ( BZ#2097668 ) 2.2.17. Migration Toolkit for Containers 1.7.1 release notes 2.2.17.1. Resolved issues There are no major resolved issues in this release. 2.2.17.2. Known issues This release has the following known issues: Incorrect DNS validation for destination namespace MigPlan cannot be validated because the destination namespace starts with a non-alphabetic character. ( BZ#2102231 ) Cloud propagation phase in migration controller is not functioning due to missing labels on Velero pods The Cloud propagation phase in the migration controller is not functioning due to missing labels on Velero pods. The EnsureCloudSecretPropagated phase in the migration controller waits until replication repository secrets are propagated on both sides. As this label is missing on Velero pods, the phase is not functioning as expected. ( BZ#2088026 ) Default CPU requests on Velero/Restic are too demanding, making scheduling fail in certain environments Default CPU requests on Velero/Restic are too demanding, making scheduling fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values are high. The resources can be configured in DPA using the podConfig field for Velero and Restic. The Migration Operator should set CPU requests to a lower value, such as 100m, so that Velero and Restic pods can be scheduled in the resource-constrained environments that Migration Toolkit for Containers (MTC) often operates in. ( BZ#2088022 ) Warning is displayed on persistentVolumes page after editing storage class conversion plan A warning is displayed on the persistentVolumes page after editing the storage class conversion plan. When editing the existing migration plan, a warning, At least one PVC must be selected for Storage Class Conversion , is displayed on the UI. ( BZ#2079549 ) Velero pod log missing from downloaded logs When downloading a compressed (.zip) folder for all logs, the velero pod log is missing. ( BZ#2076599 ) Velero pod log missing from UI drop down After a migration is performed, the velero pod log is not included in the logs provided in the dropdown list. ( BZ#2076593 ) 2.2.18. Migration Toolkit for Containers 1.7.0 release notes 2.2.18.1. New features and enhancements This release has the following new features and enhancements: The Migration Toolkit for Containers (MTC) Operator now depends upon the OpenShift API for Data Protection (OADP) Operator. When you install the MTC Operator, the Operator Lifecycle Manager (OLM) automatically installs the OADP Operator in the same namespace. You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters by using the crane tunnel-api command. 
Converting storage classes in the MTC web console: You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. 2.2.18.2. Known issues This release has the following known issues: MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Direct and indirect data transfers do not work if the destination storage is a PV that is dynamically provisioned by the AWS Elastic File System (EFS). This is due to limitations of the AWS EFS Container Storage Interface (CSI) driver. ( BZ#2085097 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . MTC 1.7.6 cannot migrate cron jobs from source clusters that support v1beta1 cron jobs to clusters of OpenShift Container Platform 4.12 and later, which do not support v1beta1 cron jobs. ( BZ#2149119 ) 2.3. Migration Toolkit for Containers 1.6 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.18 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.3.1. Migration Toolkit for Containers 1.6 release notes 2.3.1.1. New features and enhancements This release has the following new features and enhancements: State migration: You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). "New operator version available" notification: The Clusters page of the MTC web console displays a notification when a new Migration Toolkit for Containers Operator is available. 2.3.1.2. Deprecated features The following features are deprecated: MTC version 1.4 is no longer supported. 2.3.1.3. Known issues This release has the following known issues: On OpenShift Container Platform 3.10, the MigrationController pod takes too long to restart. The Bugzilla report contains a workaround. ( BZ#1986796 ) Stage pods fail during direct volume migration from a classic OpenShift Container Platform source cluster on IBM Cloud. The IBM block storage plugin does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. ( BZ#1887526 ) MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . 2.4. Migration Toolkit for Containers 1.5 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. 
You can migrate from OpenShift Container Platform 3 to 4.18 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.4.1. Migration Toolkit for Containers 1.5 release notes 2.4.1.1. New features and enhancements This release has the following new features and enhancements: The Migration resource tree on the Migration details page of the web console has been enhanced with additional resources, Kubernetes events, and live status information for monitoring and debugging migrations. The web console can support hundreds of migration plans. A source namespace can be mapped to a different target namespace in a migration plan. Previously, the source namespace was mapped to a target namespace with the same name. Hook phases with status information are displayed in the web console during a migration. The number of Rsync retry attempts is displayed in the web console during direct volume migration. Persistent volume (PV) resizing can be enabled for direct volume migration to ensure that the target cluster does not run out of disk space. The threshold that triggers PV resizing is configurable. Previously, PV resizing occurred when the disk usage exceeded 97%. Velero has been updated to version 1.6, which provides numerous fixes and enhancements. Cached Kubernetes clients can be enabled to provide improved performance. 2.4.1.2. Deprecated features The following features are deprecated: MTC versions 1.2 and 1.3 are no longer supported. The procedure for updating deprecated APIs has been removed from the troubleshooting section of the documentation because the oc convert command is deprecated. 2.4.1.3. Known issues This release has the following known issues: Microsoft Azure storage is unavailable if you create more than 400 migration plans. The MigStorage custom resource displays the following message: The request is being throttled as the limit has been reached for operation type . ( BZ#1977226 ) If a migration fails, the migration plan does not retain custom persistent volume (PV) settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) PV resizing does not work as expected for AWS gp2 storage unless the pv_resizing_threshold is 42% or greater. ( BZ#1973148 ) PV resizing does not work with OpenShift Container Platform 3.7 and 3.9 source clusters in the following scenarios: The application was installed after MTC was installed. An application pod was rescheduled on a different node after MTC was installed. OpenShift Container Platform 3.7 and 3.9 do not support the Mount Propagation feature that enables Velero to mount PVs automatically in the Restic pod. The MigAnalytic custom resource (CR) fails to collect PV data from the Restic pod and reports the resources as 0 . 
The MigPlan CR displays a status similar to the following: Example output status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: "True" type: ExtendedPVAnalysisFailed To enable PV resizing, you can manually restart the Restic daemonset on the source cluster or restart the Restic pods on the same nodes as the application. If you do not restart Restic, you can run the direct volume migration without PV resizing. ( BZ#1982729 ) 2.4.1.4. Technical changes This release has the following technical changes: The legacy Migration Toolkit for Containers Operator version 1.5.1 is installed manually on OpenShift Container Platform versions 3.7 to 4.5. The Migration Toolkit for Containers Operator version 1.5.1 is installed on OpenShift Container Platform versions 4.6 and later by using the Operator Lifecycle Manager.
[ "status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migration_toolkit_for_containers/mtc-release-notes-1
Chapter 3. Performing cross-site operations with the CLI
Chapter 3. Performing cross-site operations with the CLI Use the Data Grid command line interface (CLI) to connect to Data Grid Server clusters, manage sites, and push state transfer to backup locations. 3.1. Bringing backup locations offline and online Take backup locations offline manually and bring them back online. Prerequisites Create a CLI connection to Data Grid. Procedure Check if backup locations are online or offline with the site status command: Note --site is an optional argument. If not set, the CLI returns all backup locations. Tip Use the --all-caches option to get the backup location status for all caches. Manage backup locations as follows: Bring backup locations online with the bring-online command: Take backup locations offline with the take-offline command: Tip Use the --all-caches option to bring a backup location online, or take a backup location offline, for all caches. For more information and examples, run the help site command. 3.2. Configuring cross-site state transfer modes You can configure cross-site state transfer operations to happen automatically when Data Grid detects that backup locations come online. Alternatively you can use the default mode, which is to manually perform state transfer. Prerequisites Create a CLI connection to Data Grid. Procedure Use the site command to configure state transfer modes, as in the following examples: Retrieve the current state transfer mode. Configure automatic state transfer operations for a cache and backup location. Tip Run the help site command for more information and examples. 3.3. Pushing state to backup locations Transfer cache state to backup locations. Prerequisites Create a CLI connection to Data Grid. Procedure Use the site push-site-state command to push state transfer, as in the following example: Tip Use the --all-caches option to push state transfer for all caches. For more information and examples, run the help site command.
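These commands compose into a short recovery sequence. A minimal sketch, assuming a cache named customers backed up to a site named NYC that has just recovered from an outage (the cache and site names are illustrative):
site status --cache=customers --site=NYC
site bring-online --cache=customers --site=NYC
site push-site-state --cache=customers --site=NYC
site state-transfer-mode set --cache=customers --site=NYC --mode=AUTO
The first command confirms the current status, the next two bring the backup location back online and transfer cache state to it, and the last command opts in to automatic state transfer for future recoveries.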
[ "site status --cache=cacheName --site=NYC", "site bring-online --cache=customers --site=NYC", "site take-offline --cache=customers --site=NYC", "site state-transfer-mode get --cache=cacheName --site=NYC", "site state-transfer-mode set --cache=cacheName --site=NYC --mode=AUTO", "site push-site-state --cache=cacheName --site=NYC" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_cross-site_replication/cross-site-operations-cli
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
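The Registration Assistant generates the exact registration command for your system; a typical form, assuming username and password authentication (shown as an illustrative sketch, not the generated command), is:
sudo subscription-manager register --username <portal_username>
You are prompted for your Red Hat Customer Portal password, after which the system can install RPM packages from the entitled repositories.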
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/using_your_subscription
Chapter 24. Extending a Stratis volume with additional block devices
Chapter 24. Extending a Stratis volume with additional block devices You can attach additional block devices to a Stratis pool to provide more storage capacity for Stratis file systems. You can do it manually or by using the web console. 24.1. Adding block devices to a Stratis pool You can add one or more block devices to a Stratis pool. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. The block devices that you are adding to the Stratis pool are not in use and not mounted. The block devices that you are adding to the Stratis pool are at least 1 GiB in size each. Procedure To add one or more block devices to the pool, use: Additional resources stratis(8) man page on your system 24.2. Adding a block device to a Stratis pool by using the web console You can use the web console to add a block device to an existing Stratis pool. You can also add caches as a block device. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. A Stratis pool is created. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the Stratis pool to which you want to add a block device. On the Stratis pool page, click Add block devices and select the Tier where you want to add a block device as data or cache. If you are adding the block device to a Stratis pool that is encrypted with a passphrase, enter the passphrase. Under Block devices , select the devices you want to add to the pool. Click Add .
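A combined sketch of the add operation and a follow-up check, assuming a pool named my-pool and two unused disks (the device names are illustrative):
stratis pool add-data my-pool /dev/sdb /dev/sdc
stratis blockdev list my-pool
The second command lists the block devices in the pool, so you can confirm that the new devices appear in the data tier.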
[ "stratis pool add-data my-pool device-1 device-2 device-n" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/extending-a-stratis-volume-with-additional-block-devices
Chapter 16. RHCOS image layering
Chapter 16. RHCOS image layering Red Hat Enterprise Linux CoreOS (RHCOS) image layering allows you to easily extend the functionality of your base RHCOS image by layering additional images onto the base image. This layering does not modify the base RHCOS image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to specific nodes in the cluster. You create a custom layered image by using a Containerfile and applying it to nodes by using a MachineConfig object. The Machine Config Operator overrides the base RHCOS image, as specified by the osImageURL value in the associated machine config, and boots the new image. You can remove the custom layered image by deleting the machine config. The MCO then reboots the nodes back to the base RHCOS image. With RHCOS image layering, you can install RPMs into your base image, and your custom content will be booted alongside RHCOS. The Machine Config Operator (MCO) can roll out these custom layered images and monitor these custom containers in the same way it does for the default RHCOS image. RHCOS image layering gives you greater flexibility in how you manage your RHCOS nodes. Important Installing realtime kernel and extensions RPMs as custom layered content is not recommended. This is because these RPMs can conflict with RPMs installed by using a machine config. If there is a conflict, the MCO enters a degraded state when it tries to install the machine config RPM. You need to remove the conflicting extension from your machine config before proceeding. As soon as you apply the custom layered image to your cluster, you effectively take ownership of your custom layered images and those nodes. While Red Hat remains responsible for maintaining and updating the base RHCOS image on standard nodes, you are responsible for maintaining and updating images on nodes that use a custom layered image. You assume the responsibility for the package you applied with the custom layered image and any issues that might arise with the package. To apply a custom layered image, you create a Containerfile that references an OpenShift Container Platform image and the RPM that you want to apply. You then push the resulting custom layered image to an image registry. In a non-production OpenShift Container Platform cluster, create a MachineConfig object for the targeted node pool that points to the new image. Note Use the same base RHCOS image installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image used in your cluster. RHCOS image layering allows you to use the following types of images to create custom layered images: OpenShift Container Platform Hotfixes . You can work with Customer Experience and Engagement (CEE) to obtain and apply Hotfix packages on top of your RHCOS image. In some instances, you might want a bug fix or enhancement before it is included in an official OpenShift Container Platform release. RHCOS image layering allows you to easily add the Hotfix before it is officially released and remove the Hotfix when the underlying RHCOS image incorporates the fix. Important Some Hotfixes require a Red Hat Support Exception and are outside of the normal scope of OpenShift Container Platform support coverage or life cycle policies. In the event you want a Hotfix, it will be provided to you based on Red Hat Hotfix policy . Apply it on top of the base image and test that new custom layered image in a non-production environment. 
When you are satisfied that the custom layered image is safe to use in production, you can roll it out on your own schedule to specific node pools. For any reason, you can easily roll back the custom layered image and return to using the default RHCOS. Example Containerfile to apply a Hotfix # Using a 4.12.0 image FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... #Install hotfix rpm RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && \ rpm-ostree cleanup -m && \ ostree container commit RHEL packages . You can download Red Hat Enterprise Linux (RHEL) packages from the Red Hat Customer Portal , such as chrony, firewalld, and iputils. Example Containerfile to apply the firewalld utility FROM quay.io/openshift-release-dev/ocp-release@sha256... ADD configure-firewall-playbook.yml . RUN rpm-ostree install firewalld ansible && \ ansible-playbook configure-firewall-playbook.yml && \ rpm -e ansible && \ ostree container commit Example Containerfile to apply the libreswan utility # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` # hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... # Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ # RHEL entitled host is needed here to access RHEL packages # Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && \ systemctl enable ipsec && \ ostree container commit Because libreswan requires additional RHEL packages, the image must be built on an entitled RHEL host. Third-party packages . You can download and install RPMs from third-party organizations, such as the following types of packages: Bleeding edge drivers and kernel enhancements to improve performance or add capabilities. Forensic client tools to investigate possible and actual break-ins. Security agents. Inventory agents that provide a coherent view of the entire cluster. SSH Key management packages. Example Containerfile to apply a third-party package from EPEL (htop is shown as an illustrative package) # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` # hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... # Enable the EPEL repository, then install htop as an example third-party package RUN rpm-ostree install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && \ rpm-ostree install htop && \ ostree container commit Example Containerfile to apply a third-party package that has RHEL dependencies # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` # hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... # RHEL entitled host is needed here to access RHEL packages # Install fish as an example third-party package with RHEL dependencies RUN rpm-ostree install fish && \ ostree container commit This Containerfile installs the Linux fish program. Because fish requires additional RHEL packages, the image must be built on an entitled RHEL host. After you create the machine config, the Machine Config Operator (MCO) performs the following steps: Renders a new machine config for the specified pool or pools. Performs cordon and drain operations on the nodes in the pool or pools. Writes the rest of the machine config parameters onto the nodes. Applies the custom layered image to the node. Reboots the node using the new image. 
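To find the base image to reference in the FROM line of a Containerfile, query the cluster directly, as described in the note above; the output shown here is illustrative:
USD oc adm release info --image-for rhel-coreos
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256...
The returned pull spec is the image that the rest of the cluster boots, so building on it keeps the custom layered image aligned with your cluster version.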
Important It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster. 16.1. Applying a RHCOS custom layered image You can easily configure Red Hat Enterprise Linux CoreOS (RHCOS) image layering on the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the new custom layered image, overriding the base Red Hat Enterprise Linux CoreOS (RHCOS) image. To apply a custom layered image to your cluster, you must have the custom layered image in a repository that your cluster can access. Then, create a MachineConfig object that points to the custom layered image. You need a separate MachineConfig object for each machine config pool that you want to configure. Important When you configure a custom layered image, OpenShift Container Platform no longer automatically updates any node that uses the custom layered image. You become responsible for manually updating your nodes as appropriate. If you roll back the custom layer, OpenShift Container Platform will again automatically update the node. See the Additional resources section that follows for important information about updating nodes that use a custom layered image. Prerequisites You must create a custom layered image that is based on an OpenShift Container Platform image digest, not a tag. Note You should use the same base RHCOS image that is installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image being used in your cluster. For example, the following Containerfile creates a custom layered image from an OpenShift Container Platform 4.13 image and overrides the kernel package with one from CentOS 9 Stream: Example Containerfile for a custom layer image # Using a 4.13.0 image FROM quay.io/openshift-release/ocp-release@sha256... 1 #Install hotfix rpm RUN rpm-ostree cliwrap install-to-root / && \ 2 rpm-ostree override replace http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-{,core-,modules-,modules-core-,modules-extra-}5.14.0-295.el9.x86_64.rpm && \ 3 rpm-ostree cleanup -m && \ ostree container commit 1 Specifies the RHCOS base image of your cluster. 2 Enables cliwrap . This is currently required to intercept some command invocations made from kernel scripts. 3 Replaces the kernel packages. Note Instructions on how to create a Containerfile are beyond the scope of this documentation. Because the process for building a custom layered image is performed outside of the cluster, you must use the --authfile /path/to/pull-secret option with Podman or Buildah. Alternatively, to have the pull secret read by these tools automatically, you can add it to one of the default file locations: USDXDG_RUNTIME_DIR/containers/auth.json , ~/.docker/config.json , or ~/.dockercfg . Refer to the containers-auth.json man page for more information. You must push the custom layered image to a repository that your cluster can access. Procedure Create a machine config file. Create a YAML file similar to the following: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: os-layer-custom spec: osImageURL: quay.io/my-registry/custom-image@sha256... 2 1 Specifies the machine config pool to apply the custom layered image. 2 Specifies the path to the custom layered image in the repository. 
Create the MachineConfig object: USD oc create -f <file_name>.yaml Important It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster. Verification You can verify that the custom layered image is applied by performing any of the following checks: Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config is created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 00-worker 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-ssh 3.2.0 98m 99-worker-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-worker-ssh 3.2.0 98m os-layer-custom 10s 1 rendered-master-15961f1da260f7be141006404d17d39b 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5aff604cb1381a4fe07feaf1595a797e 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5de4837625b1cbc237de6b22bc0bc873 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 4s 2 1 New machine config 2 New rendered machine config Check that the osImageURL value in the new machine config points to the expected image: USD oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Example output Name: rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Namespace: Labels: <none> Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 5bdb57489b720096ef912f738b46330a8f577803 machineconfiguration.openshift.io/release-image-version: {product-version}.0-ec.3 API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig ... Os Image URL: quay.io/my-registry/custom-image@sha256... Check that the associated machine config pool is updated with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-15961f1da260f7be141006404d17d39b True False False 3 3 3 0 39m worker rendered-worker-5de4837625b1cbc237de6b22bc0bc873 True False False 3 0 0 0 39m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. In this case, you will not see the new machine config listed in the output. When the field becomes False , the worker machine config pool has rolled out to the new machine config. Check the nodes to see that scheduling on the nodes is disabled. 
This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.26.0 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.26.0 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.26.0 When the node is back in the Ready state, check that the node is using the custom layered image: Open an oc debug session to the node. For example: USD oc debug node/ip-10-0-155-125.us-west-1.compute.internal Set /host as the root directory within the debug shell: sh-4.4# chroot /host Run the rpm-ostree status command to view that the custom layered image is in use: sh-4.4# sudo rpm-ostree status Example output Additional resources Updating with a RHCOS custom layered image 16.2. Removing a RHCOS custom layered image You can easily revert Red Hat Enterprise Linux CoreOS (RHCOS) image layering from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base Red Hat Enterprise Linux CoreOS (RHCOS) image, overriding the custom layered image. To remove a Red Hat Enterprise Linux CoreOS (RHCOS) custom layered image from your cluster, you need to delete the machine config that applied the image. Procedure Delete the machine config that applied the custom layered image. USD oc delete mc os-layer-custom After deleting the machine config, the nodes reboot. Verification You can verify that the custom layered image is removed by performing any of the following checks: Check that the worker machine config pool is updating with the machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m 1 1 When the UPDATING field is True , the machine config pool is updating with the machine config. When the field becomes False , the worker machine config pool has rolled out to the machine config. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.26.0 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.26.0 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.26.0 When the node is back in the Ready state, check that the node is using the base image: Open an oc debug session to the node. For example: USD oc debug node/ip-10-0-155-125.us-west-1.compute.internal Set /host as the root directory within the debug shell: sh-4.4# chroot /host Run the rpm-ostree status command to view that the custom layered image is in use: sh-4.4# sudo rpm-ostree status Example output 16.3. 
Updating with a RHCOS custom layered image When you configure Red Hat Enterprise Linux CoreOS (RHCOS) image layering, OpenShift Container Platform no longer automatically updates the node pool that uses the custom layered image. You become responsible for manually updating your nodes as appropriate. To update a node that uses a custom layered image, follow these general steps: The cluster automatically upgrades to version x.y.z+1, except for the nodes that use the custom layered image. You could then create a new Containerfile that references the updated OpenShift Container Platform image and the RPM that you had previously applied. Create a new machine config that points to the updated custom layered image. Updating a node with a custom layered image is not required. However, if that node gets too far behind the current OpenShift Container Platform version, you could experience unexpected results.
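The build-and-push workflow that precedes creating the machine config can be sketched with Podman; this assumes the Containerfile is in the current directory and that quay.io/my-registry/custom-image is a repository your cluster can pull from, as in the example machine config:
USD podman build --authfile /path/to/pull-secret -t quay.io/my-registry/custom-image:latest .
USD podman push quay.io/my-registry/custom-image:latest
The --authfile option lets the build pull the entitled base image; pushing to your own registry might require a separate podman login. Because the osImageURL field references the image by digest, record the digest of the pushed image and use it in the machine config rather than the tag.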
[ "Using a 4.12.0 image FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 #Install hotfix rpm RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && rpm-ostree cleanup -m && ostree container commit", "FROM quay.io/openshift-release-dev/ocp-release@sha256 ADD configure-firewall-playbook.yml . RUN rpm-ostree install firewalld ansible && ansible-playbook configure-firewall-playbook.yml && rpm -e ansible && ostree container commit", "Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit", "Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit", "Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit", "Using a 4.13.0 image FROM quay.io/openshift-release/ocp-release@sha256... 1 #Install hotfix rpm RUN rpm-ostree cliwrap install-to-root / && \\ 2 rpm-ostree override replace http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-{,core-,modules-,modules-core-,modules-extra-}5.14.0-295.el9.x86_64.rpm && \\ 3 rpm-ostree cleanup -m && ostree container commit", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: os-layer-custom spec: osImageURL: quay.io/my-registry/custom-image@sha256... 
2", "oc create -f <file_name>.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 00-worker 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-ssh 3.2.0 98m 99-worker-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-worker-ssh 3.2.0 98m os-layer-custom 10s 1 rendered-master-15961f1da260f7be141006404d17d39b 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5aff604cb1381a4fe07feaf1595a797e 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5de4837625b1cbc237de6b22bc0bc873 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 4s 2", "oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873", "Name: rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Namespace: Labels: <none> Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 5bdb57489b720096ef912f738b46330a8f577803 machineconfiguration.openshift.io/release-image-version: {product-version}.0-ec.3 API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Os Image URL: quay.io/my-registry/custom-image@sha256", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-15961f1da260f7be141006404d17d39b True False False 3 3 3 0 39m worker rendered-worker-5de4837625b1cbc237de6b22bc0bc873 True False False 3 0 0 0 39m 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.26.0 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.26.0 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.26.0", "oc debug node/ip-10-0-155-125.us-west-1.compute.internal", "sh-4.4# chroot /host", "sh-4.4# sudo rpm-ostree status", "State: idle Deployments: * ostree-unverified-registry:quay.io/my-registry/ Digest: sha256:", "oc delete mc os-layer-custom", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.26.0 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.26.0 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.26.0", "oc debug node/ip-10-0-155-125.us-west-1.compute.internal", "sh-4.4# chroot /host", "sh-4.4# sudo rpm-ostree status", "State: idle Deployments: * 
ostree-unverified-registry:quay.io/openshift-release-dev/ocp-release@sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73 Digest: sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/post-installation_configuration/coreos-layering
Chapter 6. Subscriptions
Chapter 6. Subscriptions 6.1. Subscription offerings The Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions provide 20 cores; in the case of IBM Power, a 2-core subscription at an SMT level of 8 provides 2 cores, or 16 vCPUs, that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation cluster containing PVs participating in active replication, either as a source or destination, requires an OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To learn how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Whether a particular system consumes one or more cores currently depends on whether that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node; however, each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Whether a particular system consumes one or more cores currently depends on the level of simultaneous multithreading (SMT) configured. IBM Power provides simultaneous multithreading levels of 1, 2, 4, or 8 for each core, which correspond to the number of vCPUs as shown in the table below. Table 6.1.
Different SMT levels and their corresponding vCPUs

SMT level:  SMT=1    | SMT=2    | SMT=4     | SMT=8
1 Core:     1 vCPU   | 2 vCPUs  | 4 vCPUs   | 8 vCPUs
2 Cores:    2 vCPUs  | 4 vCPUs  | 8 vCPUs   | 16 vCPUs
4 Cores:    4 vCPUs  | 8 vCPUs  | 16 vCPUs  | 32 vCPUs

For systems where SMT is configured, the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs at an SMT level of 1, 4 vCPUs at an SMT level of 2, 8 vCPUs at an SMT level of 4, and 16 vCPUs at an SMT level of 8, as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at an SMT level of 8 requires a 2-core subscription, based on dividing the number of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2 cores). As subscriptions come in 2-core units, you need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core ends up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading, resulting in 1 calculated core, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies, even though the infrastructure nodes themselves do not need any OpenShift Container Platform or OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide.
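To make the arithmetic above concrete, the following is a minimal shell sketch of the subscription calculation; the vCPU count and threads-per-core values are example inputs, not values taken from any particular system:

#!/bin/sh
# Example inputs: a VM with 16 vCPUs on IBM Power at SMT-8.
# For a hyperthreaded x86 system, set threads_per_core=2 instead.
vcpus=16
threads_per_core=8
# Cores consumed = vCPUs divided by logical threads per core.
cores=USD(( vcpus / threads_per_core ))
# Subscriptions come in 2-core units; odd core counts round up.
subscriptions=USD(( (cores + 1) / 2 ))
echo "USD{vcpus} vCPUs -> USD{cores} cores -> USD{subscriptions} x 2-core subscription(s)"

Run against the examples in this chapter, this yields one 2-core subscription for 16 vCPUs at SMT-8, and two 2-core subscriptions for 8 vCPUs on a hyperthreaded x86 system.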
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/planning_your_deployment/subscriptions_rhodf
Chapter 7. Troubleshooting monitoring issues
Chapter 7. Troubleshooting monitoring issues Find troubleshooting steps for common issues with core platform and user-defined project monitoring. 7.1. Investigating why user-defined project metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined projects. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the preceding step. The following example queries the prometheus-example-monitor service monitor in the ns1 project: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI. Log in to the OpenShift Container Platform web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project.
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Enabling monitoring for user-defined projects Specifying how a service is monitored Getting detailed information about a metrics target 7.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges. Check the number of scrape samples that are being collected. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Enter a Prometheus Query Language (PromQL) query in the Expression field. 
The following example queries help to identify high cardinality metrics that might result in high disk space consumption: By running the following query, you can identify the ten jobs that have the highest number of scrape samples: topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling))) By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour: topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h]))) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts: If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a cluster administrator: Get the Prometheus API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}') Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Query the TSDB status for Prometheus by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/status/tsdb" Example output "status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ... Additional resources Accessing monitoring APIs by using the CLI Setting scrape sample and label limits for user-defined projects Submitting a support case 7.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus. The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally. Note There are two KubePersistentVolumeFillingUp alerts: Critical alert : The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining. Warning alert : The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days. To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). 
Procedure List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo "[0-9|A-Z]{26}")' 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. Example output 308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B Identify which and how many blocks could be removed, then remove the blocks. The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod: USD oc debug prometheus-k8s-0 -n openshift-monitoring \ -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 \ -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \ while read BLOCK; do rm -r /prometheus/USDBLOCK; done' Verify the usage of the mounted PV and ensure there is enough space available by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') -- df -h /prometheus/ 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining: Example output Starting pod/prometheus-k8s-0-debug-j82w4 ... Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod ...
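Returning to the label-matching check in the first step of this chapter, the two label sets can also be compared side by side with jsonpath output. This is a sketch that reuses the example names from this chapter ( prometheus-example-app and prometheus-example-monitor in the ns1 project):

USD oc -n ns1 get service prometheus-example-app -o jsonpath='{.metadata.labels}{"\n"}'
USD oc -n ns1 get servicemonitor prometheus-example-monitor -o jsonpath='{.spec.selector.matchLabels}{"\n"}'

If the two maps do not share the labels that the selector requires, the service monitor never selects the service, and no targets or metrics appear.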
[ "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))", "topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"", "\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'", "308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B", "oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/", "Starting 
pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring/troubleshooting-monitoring-issues
function::ansi_new_line
function::ansi_new_line Name function::ansi_new_line - Move cursor to new line. Synopsis ansi_new_line() Arguments None Description Sends the ANSI code for a new line.
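A minimal usage sketch follows; it assumes the systemtap package is installed, and the probe and strings are illustrative only:

USD stap -e 'probe begin { print("first line"); ansi_new_line(); print("second line"); exit() }'

The two strings print on separate lines because ansi_new_line() emits the new-line code between them.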
[ "ansi_new_line()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ansi-new-line
Chapter 13. Troubleshooting builds
Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 13.2. Service certificate generation failure If service certificate generation fails: Issue Service certificate generation fails, and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains output similar to the following: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . To clear the annotations, enter the following commands: USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command that removes an annotation has a - after the annotation name to be removed.
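After clearing the annotations, you can confirm that they are gone before expecting the certificate to be regenerated; this sketch uses standard oc output options, and <service_name> is a placeholder:

USD oc get service <service_name> -o jsonpath='{.metadata.annotations}{"\n"}'

Neither of the serving-cert-generation-error annotations should appear in the output.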
[ "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_buildconfig/troubleshooting-builds_build-configuration
Chapter 6. Backing OpenShift Container Platform applications with OpenShift Data Foundation
Chapter 6. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform cluster by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to the OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select the ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand block PVs, but you cannot reduce the storage capacity after the creation of the Persistent Volume Claim. Specify the mount path and, if required, the subpath for the volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads Deployments . Click Workloads Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page.
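If you prefer the CLI to the Add Storage dialog, an equivalent Persistent Volume Claim can be created directly. The following is a sketch only; the claim name, size, and storage class name are assumptions, so run oc get storageclass to find the class names available in your cluster:

USD oc create -n <project_name> -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                # RWO; use ReadWriteMany for RWX
  resources:
    requests:
      storage: 10Gi                # example capacity
  storageClassName: ocs-storagecluster-ceph-rbd   # example RBD class; verify in your cluster
EOF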
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/backing-openshift-container-platform-applications-with-openshift-data-foundation_rhodf
Chapter 29. Ruby (DEPRECATED)
Chapter 29. Ruby (DEPRECATED) Overview Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write. The Ruby support is part of the camel-script module. Important Ruby in Apache Camel is deprecated and will be removed in a future release. Adding the script module To use Ruby in your routes, you need to add a dependency on camel-script to your project, as shown in Example 29.1, "Adding the camel-script dependency" . Example 29.1. Adding the camel-script dependency Static import To use the ruby() static method in your application code, include the following import statement in your Java source files: Built-in attributes Table 29.1, "Ruby attributes" lists the built-in attributes that are accessible when using Ruby. Table 29.1. Ruby attributes Attribute Type Value context org.apache.camel.CamelContext The Camel Context exchange org.apache.camel.Exchange The current Exchange request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties org.apache.camel.builder.script.PropertiesFunction Function with a resolve method to make it easier to use the properties component inside scripts. The attributes are all set at ENGINE_SCOPE . Example Example 29.2, "Route using Ruby" shows a route that uses Ruby. Example 29.2. Route using Ruby Using the properties component To access a property value from the properties component, invoke the resolve method on the built-in properties attribute, as follows: where PropKey is the key of the property that you want to resolve; the key value is of String type. For more details about the properties component, see Properties in the Apache Camel Component Reference Guide .
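Putting the pieces together, the following Java DSL sketch combines the setHeader example above with a hypothetical property key named greeting ; the endpoint URIs are illustrative only:

from("direct:start")
    .setHeader("myHeader").ruby("properties.resolve('greeting')")  // 'greeting' is a hypothetical property key
    .to("mock:result");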
[ "<!-- Maven POM File --> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-script</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "import static org.apache.camel.builder.script.ScriptBuilder.*;", "<camelContext> <route> <from uri=\"direct:start\"/> <choice> <when> <langauge langauge=\"ruby\">USDrequest.headers['user'] == 'admin'</langauge> <to uri=\"seda:adminQueue\"/> </when> <otherwise> <to uri=\"seda:regularQueue\"/> </otherwise> </choice> </route> </camelContext>", ".setHeader(\"myHeader\").ruby(\"properties.resolve( PropKey )\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Ruby
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/updating_openshift_data_foundation/making-open-source-more-inclusive
Chapter 13. Renaming Satellite Server or Capsule Server
Chapter 13. Renaming Satellite Server or Capsule Server To rename Satellite Server or Capsule Server, use the satellite-change-hostname script. Important When changing the domain name of your Satellite Server or Capsule Server, update the hostname using satellite-change-hostname to avoid networking issues. 13.1. Renaming Satellite Server The host name of Satellite Server is used by Satellite Server components, all Capsule Servers, and hosts registered to it for communication. This procedure ensures that in addition to renaming Satellite Server, you also update all references to point to the new host name. Warning Renaming your Satellite Server host shuts down all Satellite services on that host. The services restart after the renaming is complete. Prerequisites Back up your Satellite Server before changing its host name. If you fail to successfully rename it, restore it from the backup. For more information, see Chapter 11, Backing up Satellite Server and Capsule Server . Run the hostname and hostname -f commands on Satellite Server. If both commands do not return the FQDN of Satellite Server, the satellite-change-hostname script will fail to complete. If the hostname command returns the shortname of Satellite Server instead of the FQDN, use hostnamectl set-hostname My_Old_FQDN to set the old FQDN correctly before using the satellite-change-hostname script. If Satellite Server has a custom SSL certificate installed, obtain a new certificate for the new FQDN of the host. For more information, see Configuring Satellite Server with a Custom SSL Certificate in Installing Satellite Server in a connected network environment . Procedure On Satellite Server, run the satellite-change-hostname script, and provide the new host name. Choose one of the following methods: If your Satellite Server is installed with the default self-signed SSL certificates: If your Satellite Server is installed with custom SSL certificates: If you have created a custom SSL certificate for the new Satellite Server host name, run the Satellite installation script to install the certificate. For more information about installing a custom SSL certificate, see Deploying a Custom SSL Certificate to Satellite Server in Installing Satellite Server in a connected network environment . Reregister all hosts and Capsule Servers that are registered to Satellite Server. For more information, see Registering Hosts in Managing hosts . On all Capsule Servers, run the Satellite installation script to update references to the new host name: On Satellite Server, list all Capsule Servers: On Satellite Server, synchronize content for each Capsule Server: If you use the virt-who agent, update the virt-who configuration files with the new host name. For more information, see Modifying a virt-who Configuration in Configuring virtual machine subscriptions . If you use external authentication, reconfigure Satellite Server for external authentication after you run the satellite-change-hostname script. For more information, see Configuring External Authentication in Installing Satellite Server in a connected network environment . 13.2. Renaming Capsule Server The host name of Capsule Server is referenced by Satellite Server components and all hosts registered to it. This procedure ensures that in addition to renaming Capsule Server, you also update all references to the new host name. Warning Renaming your Capsule Server host shuts down all Satellite services on that host. The services restart after the renaming is complete. 
Prerequisites Back up your Capsule Server before renaming. If you fail to successfully rename it, restore it from the backup. For more information, see Chapter 11, Backing up Satellite Server and Capsule Server . Run the hostname and hostname -f commands on Capsule Server. If both commands do not return the FQDN of Capsule Server, the satellite-change-hostname script will fail to complete. If the hostname command returns the shortname of Capsule Server instead of the FQDN, use hostnamectl set-hostname My_Old_FQDN to set the old FQDN correctly before attempting to use the satellite-change-hostname script. Procedure On your Satellite Server, generate a new certificates archive file for your Capsule Server. If you are using the default SSL certificate, regenerate the default SSL certificates: Ensure that you enter the full path to the .tar file. If you are using a custom SSL certificate, create a new SSL certificate for your Capsule Server. For more information, see Configuring Capsule Server with a Custom SSL Certificate in Installing Capsule Server . On your Satellite Server, copy the certificates archive file to your Capsule Server. For example, to copy the archive file to the root user's home directory: On your Capsule Server, run the satellite-change-hostname script and provide the host's new name, Satellite credentials, and certificates archive file name. Ensure that you enter the full path to the .tar file. If you have created a custom certificate for your Capsule Server, deploy the certificate to your Capsule Server by entering the satellite-installer command that the capsule-certs-generate command returned in a previous step. For more information, see Deploying a Custom SSL Certificate to Capsule Server in Installing Capsule Server . On all hosts registered to your Capsule Server, enter the following commands to reinstall the bootstrap RPM, reregister clients, and refresh their subscriptions. You can use the remote execution feature to perform this step. For more information, see Configuring and Setting up Remote Jobs in Managing hosts . Update the Capsule host name in the Satellite web UI. In the Satellite web UI, navigate to Infrastructure > Capsules . Locate Capsule Server in the list, and click Edit . Edit the Name and URL fields to match Capsule Server's new host name, then click Submit . On your DNS server, add a record for the new hostname of your Capsule Server, and delete the record of the previous host name.
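Before reregistering hosts, it can help to confirm that the DNS change has taken effect. The following is a quick sketch using dig with the hypothetical host names from the examples above:

USD dig +short new-capsule.example.com    # should return the IP address of your Capsule Server
USD dig +short capsule.example.com        # should return nothing once the previous record is removed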
[ "satellite-change-hostname new-satellite --username My_Username --password My_Password", "satellite-change-hostname new-satellite --username My_Username --password My_Password --custom-cert \"/root/ownca/test.com/test.com.crt\" --custom-key \"/root/ownca/test.com/test.com.key\"", "satellite-installer --foreman-proxy-foreman-base-url https:// new-satellite.example.com --foreman-proxy-trusted-hosts new-satellite.example.com", "hammer capsule list", "hammer capsule content synchronize --id My_capsule_ID", "capsule-certs-generate --certs-tar /root/ new-capsule.example.com-certs.tar --foreman-proxy-fqdn new-capsule.example.com", "scp /root/ new-capsule.example.com-certs.tar root@ capsule.example.com :", "satellite-change-hostname new-capsule.example.com --certs-tar /root/ new-capsule.example.com-certs.tar --password My_Password --username My_Username", "dnf remove katello-ca-consumer* dnf install http:// new-capsule.example.com /pub/katello-ca-consumer-latest.noarch.rpm subscription-manager register --environment=\" My_Lifecycle_Environment \" --force --org=\" My_Organization \" subscription-manager refresh" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/renaming-satellite-or-capsule_admin
Chapter 14. Resolving common problems in RHUI 4
Chapter 14. Resolving common problems in RHUI 4 The following table lists known issues with Red Hat Update Infrastructure. If you encounter any of these issues, report the problem through Bugzilla. Table 14.1. Common problems in Red Hat Update Infrastructure

Event: Installation and Configuration
Description of known issue: You experience communication issues between the RHUA and the CDSs.
Recommendation: Verify the fully qualified domain name (FQDN) is set for the RHUA and CDS and is resolvable. Configure the HTTP proxy properly.

Event: Synchronization
Description of known issue: You cannot synchronize repositories with Red Hat.
Recommendation: Verify the RHUI SKUs are in your account. Verify the proper content certificates are loaded to the RHUA. Look for temporary CDN issues. Look for any HTTP proxy in your environment.

Event: Red Hat Update Appliance/Content Delivery Network Communication
Description of known issue: The Red Hat Update Appliance is not communicating with the Content Delivery Network.
Recommendation: Use the content certificate in /etc/pki/rhui/redhat (the .pem file) to test connectivity and access between the RHUA and the CDN: wget -O - --certificate /etc/pki/rhui/redhat/* --ca-certificate /etc/rhsm/ca/redhat-uep.pem https://cdn.redhat.com/content/dist/rhel8/8/x86_64/baseos/os/repodata/repomd.xml
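If wget is not available, the same connectivity test can be run with curl. This sketch reuses the certificate paths and CDN URL from the table above; adjust the glob if more than one .pem file is present:

curl -sS -o /dev/null -w '%{http_code}\n' --cert /etc/pki/rhui/redhat/*.pem --cacert /etc/rhsm/ca/redhat-uep.pem https://cdn.redhat.com/content/dist/rhel8/8/x86_64/baseos/os/repodata/repomd.xml

A 200 response code indicates that the RHUA can reach the CDN with the installed content certificate.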
null
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/configuring_and_managing_red_hat_update_infrastructure/assembly_cmg-resolving-common-problems-rhui4
Release Notes
Release Notes Red Hat Certificate System 10 Highlighted features and updates related to Red Hat Certificate System 10 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/release_notes/index
A. Revision History
A. Revision History Revision History Revision 1-1 Wed Feb 25 2015 Laura Bailey Adding metadata to improve document display on the portal. Revision 1-0 Wed Aug 12 2010 Ryan Lerch Initial version of the Red Hat Enterprise Linux 6 Release Notes
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/appe-publican-revision_history