title | content | commands | url |
---|---|---|---|
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_amazon_web_services/making-open-source-more-inclusive |
Chapter 19. Configuring Clients | Chapter 19. Configuring Clients 19.1. Client Configuration Using the wildfly-config.xml File JBoss EAP 7.1 introduced the wildfly-config.xml file with the purpose of unifying all client configurations into one single configuration file. The following table describes the clients and types of configuration that can be done using the wildfly-config.xml file in JBoss EAP and a link to the reference schema link for each. Client Configuration Schema Location / Configuration Information Authentication client The schema reference is provided in the product installation at EAP_HOME /docs/schema/elytron-client-1_2.xsd . The schema is also published at http://www.jboss.org/schema/jbossas/elytron-client-1_2.xsd . See Client Authentication Configuration Using the wildfly-config.xml File for more information and for an example configuration. Additional information can be found in Configure Client Authentication with Elytron Client in How to Configure Identity Management for JBoss EAP. Jakarta Enterprise Beans client The schema reference is provided in the product installation at EAP_HOME /docs/schema/wildfly-client-ejb_3_0.xsd . The schema is also published at http://www.jboss.org/schema/jbossas/wildfly-client-ejb_3_0.xsd . See Jakarta Enterprise Beans Client Configuration Using the wildfly-config.xml File for more information and for an example configuration. Another simple example is located in the Migrate a Jakarta Enterprise Beans Client to Elytron section of the Migration Guide for JBoss EAP. HTTP client The schema reference is provided in the product installation at EAP_HOME /docs/schema/wildfly-http-client_1_0.xsd . The schema is also published at http://www.jboss.org/schema/jbossas/wildfly-http-client_1_0.xsd . Note This feature is provided as a Technology Preview only. See HTTP Client Configuration Using the wildfly-config.xml File for more information and for an example configuration. Remoting client The schema reference is provided in the product installation at EAP_HOME /docs/schema/jboss-remoting_5_0.xsd . The schema is also published at http://www.jboss.org/schema/jbossas/jboss-remoting_5_0.xsd . See Remoting Client Configuration Using the wildfly-config.xml File for more information and for an example configuration. XNIO worker client The schema reference is provided in the product installation at EAP_HOME /docs/schema/xnio_3_5.xsd . The schema is also published at http://www.jboss.org/schema/jbossas/xnio_3_5.xsd . See Default XNIO Worker Configuration Using the wildfly-config.xml File for more information and for an example configuration. 19.1.1. Client Authentication Configuration Using the wildfly-config.xml File You can use the authentication-client element, which is in the urn:elytron:client:1.2 namespace, to configure client authentication information using the wildfly-config.xml file. This section describes how to configure client authentication using this element. 
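Before looking at the authentication-client element in detail, it can help to see where each of the client configurations listed above sits in the file. The following skeleton is only a sketch: the element names and namespaces are taken from the table and sections in this chapter, and each element is shown empty here because its contents are described in the sections that follow.

Example: Combined wildfly-config.xml Skeleton (sketch)

<configuration>
    <authentication-client xmlns="urn:elytron:client:1.2">
        <!-- Client authentication and SSL context configuration. -->
    </authentication-client>
    <jboss-ejb-client xmlns="urn:jboss:wildfly-client-ejb:3.0">
        <!-- Jakarta Enterprise Beans client configuration. -->
    </jboss-ejb-client>
    <http-client xmlns="urn:wildfly-http-client:1.0">
        <!-- HTTP client configuration (Technology Preview). -->
    </http-client>
    <endpoint xmlns="urn:jboss-remoting:5.0">
        <!-- Remoting client configuration. -->
    </endpoint>
    <worker xmlns="urn:xnio:3.5">
        <!-- Default XNIO worker configuration. -->
    </worker>
</configuration>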
authentication-client Elements and Attributes The authentication-client element can optionally contain the following top level child elements, along with their child elements: credential-stores credential-store providers global use-service-loader attributes protection-parameter-credentials key-store-reference credential-store-reference clear-password key-pair public-key-pem private-key-pem certificate public-key-pem bearer-token oauth2-bearer-token client-credentials resource-owner-credentials key-stores key-store file load-from resource key-store-clear-password key-store-credential authentication-rules rule match-no-user match-user match-protocol match-host match-path match-port match-urn match-domain-name match-abstract-type authentication-configurations configuration set-host-name set-port-number set-protocol set-user-name set-anonymous set-mechanism-realm-name rewrite-user-name-regex sasl-mechanism-selector set-mechanism-properties property credentials key-store-reference credential-store-reference clear-password key-pair certificate public-key-pem bearer-token oauth2-bearer-token set-authorization-name providers global use-service-loader use-provider-sasl-factory use-service-loader-sasl-factory net-authenticator ssl-context-rules rule match-no-user match-user match-protocol match-host match-path match-port match-urn match-domain-name match-abstract-type ssl-contexts default-ssl-context ssl-context key-store-ssl-certificate trust-store cipher-suite protocol provider-name certificate-revocation-list providers global use-service-loader providers global use-service-loader credential-stores This optional element defines credential stores that are referenced from elsewhere in the configuration as an alternative to embedding credentials within the configuration. It can contain any number of credential-store elements. Example: credential-stores Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <credential-stores> <credential-store name="..." type="..." provider="..." > <attributes> <attribute name="..." value="..." /> </attributes> <protection-parameter-credentials>...</protection-parameter-credentials> </credential-store> </credential-stores> </authentication-client> </configuration> credential-store This element defines a credential store that is referenced from elsewhere in the configuration. It has the following attributes. Attribute Name Attribute Description name The name of the credential store. This attribute is required. type The type of credential store. This attribute is optional. provider The name of the java.security.Provider to use to load the credential store. This attribute is optional. It can contain one and only one of each of the following child elements. providers attributes protection-parameter-credentials attributes This element defines the configuration attributes used to initialize the credential store and can be repeated as many times as is required for the configuration. Example: attributes Configuration <attributes> <attribute name="..." value="..." /> </attributes> protection-parameter-credentials This element contains one or more credentials to be assembled into a protection parameter to be used when initializing the credential store. 
It can contain one or more of the following child elements, which are dependent on the credential store implementation: key-store-reference credential-store-reference clear-password key-pair certificate public-key-pem bearer-token oauth2-bearer-token Example: protection-parameter-credentials Configuration <protection-parameter-credentials> <key-store-reference>...</key-store-reference> <credential-store-reference store="..." alias="..." clear-text="..." /> <clear-password password="..." /> <key-pair public-key-pem="..." private-key-pem="..." /> <certificate private-key-pem="..." pem="..." /> <public-key-pem>...</public-key-pem> <bearer-token value="..." /> <oauth2-bearer-token token-endpoint-uri="...">...</oauth2-bearer-token> </protection-parameter-credentials> key-store-reference This element, which is not currently used by any authentication mechanisms in JBoss EAP, defines a reference to a keystore. It has the following attributes. Attribute Name Attribute Description key-store-name The keystore name. This attribute is required. alias The alias of the entry to load from the referenced keystore. This can be omitted only for keystores that contain just a single entry. It can contain one and only one of the following child elements. key-store-clear-password credential-store-reference key-store-credential Example: key-store-reference Configuration <key-store-reference key-store-name="..." alias="..."> <key-store-clear-password password="..." /> <key-store-credential>...</key-store-credential> </key-store-reference> credential-store-reference This element defines a reference to a credential store. It has the following attributes. Attribute Name Attribute Description store The credential store name. alias The alias of the entry to load from the referenced credential store. This can be omitted only for credential stores that contain just a single entry. clear-text The clear text password. clear-password This element defines a clear text password. key-pair This element, which is not currently used by any authentication mechanisms in JBoss EAP, defines a public and private key pair. It can contain the following child elements. public-key-pem private-key-pem public-key-pem This element, which is not currently used by any authentication mechanisms in JBoss EAP, defines the PEM-encoded public key. private-key-pem This element defines the PEM-encoded private key. certificate This element, which is not currently used by any authentication mechanisms in JBoss EAP, specifies a certificate. It has the following attributes. Attribute Name Attribute Description private-key-pem A PEM-encoded private key. pem The corresponding certificate. bearer-token This element defines a bearer token. oauth2-bearer-token This element defines an OAuth 2 bearer token. It has the following attribute. Attribute Name Attribute Description token-endpoint-uri The URI of the token endpoint. It can contain one and only one of each of the following child elements. client-credentials resource-owner-credentials client-credentials This element defines the client credentials. It has the following attributes. Attribute Name Attribute Description client-id The client ID. This attribute is required. client-secret The client secret. This attribute is required. resource-owner-credentials This element defines the resource owner credentials. It has the following attributes. Attribute Name Attribute Description name The resource name. This attribute is required. password The password. This attribute is required. 
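For example, an oauth2-bearer-token definition that obtains a token using client credentials might look like the following sketch. The same child elements can also appear in the credentials element described later in this section. The token endpoint URI, client ID, and client secret shown here are placeholder values, not values taken from this documentation.

Example: oauth2-bearer-token Configuration (sketch)

<oauth2-bearer-token token-endpoint-uri="https://sso.example.com/token">
    <client-credentials client-id="example-client" client-secret="example-secret" />
</oauth2-bearer-token>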
key-stores This optional element defines keystores that are referenced from elsewhere in the configuration. Example: key-stores Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="..."> <!-- The following 3 elements specify where to load the keystore from. --> <file name="..." /> <load-from uri="..." /> <resource name="..." /> <!-- One of the following to specify the protection parameter to unlock the keystore. --> <key-store-clear-password password="..." /> <key-store-credential>...</key-store-credential> </key-store> </key-stores> ... </authentication-client> </configuration> key-store This optional element defines a keystore that is referenced from elsewhere in the configuration. The key-store has the following attributes. Attribute Name Attribute Description name The name of the keystore. This attribute is required. type The keystore type, for example, JCEKS . This attribute is required. provider The name of the java.security.Provider to use to load the credential store. This attribute is optional. wrap-passwords If true, passwords will wrap. The passwords are stored by taking the clear password contents, encoding them in UTF-8, and storing the resultant bytes as a secret key. Defaults to false . It must contain exactly one of the following elements, which define where to load the keystore from. file load-from resource It must also contain one of the following elements, which specifies the protection parameter to use when initializing the keystore. key-store-clear-password key-store-credential file This element specifies the name of the keystore file. It has the following attribute. Attribute Name Attribute Description name The fully qualified file path and name of the file. load-from This element specifies the URI of the keystore file. It has the following attribute. Attribute Name Attribute Description uri The URI for the keystore file. resource This element specifies the name of the resource to load from the Thread context class loader. It has the following attribute. Attribute Name Attribute Description name The name of the resource. key-store-clear-password This element specifies the clear text password. It has the following attribute. Attribute Name Attribute Description password The clear text password. key-store-credential This element specifies a reference to another keystore that obtains an entry to use as the protection parameter to access this keystore. The key-store-credential element has the following attributes. Attribute Name Attribute Description key-store-name The keystore name. This attribute is required. alias The alias of the entry to load from the referenced keystore. This can be omitted only for keystores that contain just a single entry. It can contain one and only one of the following child elements. key-store-clear-password credential-store-reference key-store-credential Example: key-store-credential Configuration <key-store-credential key-store-name="..." alias="..."> <key-store-clear-password password="..." /> <key-store-credential>...</key-store-credential> </key-store-credential> authentication-rules This element defines the rules to match against the outbound connection to apply the appropriate authentication configuration. When an authentication-configuration is required, the URI of the accessed resources as well as an optional abstract type and abstract type authority are matched against the rules defined in the configuration to identify which authentication-configuration should be used. 
This element can contain one or more child rule elements. Example: authentication-rules Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> ... <authentication-rules> <rule use-configuration="..."> ... </rule> </authentication-rules> ... </authentication-client> </configuration> rule This element defines the rules to match against the outbound connection to apply the appropriate authentication configuration. It has the following attribute. Attribute Name Attribute Description use-configuration The authentication configuration that is chosen when rules match. Authentication configuration rule matching is independent of SSL context rule matching. The authentication rule structure is identical to the SSL context rule structure, except that it references an authentication configuration, while the SSL context rule references an SSL context. It can contain the following child elements. match-no-user match-user match-protocol match-host match-path match-port match-urn match-domain-name match-abstract-type Example: rule Configuration for Authentication <rule use-configuration="..."> <!-- At most one of the following two can be defined. --> <match-no-user /> <match-user name="..." /> <!-- Each of the following can be defined at most once. --> <match-protocol name="..." /> <match-host name="..." /> <match-path name="..." /> <match-port number="..." /> <match-urn name="..." /> <match-domain name="..." /> <match-abstract-type name="..." authority="..." /> </rule> match-no-user This rule matches when there is no user-info embedded within the URI. match-user This rule matches when the user-info embedded in the URI matches the name attribute specified in this element. match-protocol This rule matches when the protocol within the URI matches the protocol name attribute specified in this element. match-host This rule matches when the host name specified within the URI matches the host name attribute specified in this element. match-path This rule matches when the path specified within the URI matches the path name attribute specified in this element. match-port This rule matches when the port number specified within the URI matches the port number attribute specified in this element. This only matches against the number specified within the URI and not against any default port number derived from the protocol. match-urn This rule matches when the scheme specific part of the URI matches the name attribute specified in this element. match-domain-name This rule matches when the protocol of the URI is domain and the scheme specific part of the URI matches the name attribute specified in this element. match-abstract-type This rule matches when the abstract type matches the name attribute and the authority matches the authority attribute specified in this element. authentication-configurations This element defines named authentication configurations that are to be chosen by the authentication rules. It can contain one or more configuration elements. Example: authentication-configurations Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <authentication-configurations> <configuration name="..."> <!-- Destination Overrides. --> <set-host name="..." /> <set-port number="..." /> <set-protocol name="..." /> <!-- At most one of the following two elements. --> <set-user-name name="..." /> <set-anonymous /> <set-mechanism-realm name="..." /> <rewrite-user-name-regex pattern="..." replacement="..." /> <sasl-mechanism-selector selector="..." 
/> <set-mechanism-properties> <property key="..." value="..." /> </set-mechanism-properties> <credentials>...</credentials> <set-authorization-name name="..." /> <providers>...</providers> <!-- At most one of the following two elements. --> <use-provider-sasl-factory /> <use-service-loader-sasl-factory module-name="..." /> </configuration> </authentication-configurations> </authentication-client> </configuration> configuration This element defines named authentication configurations that are to be chosen by the authentication rules. It can contain the following child elements. The optional set-host-name , set-port-number , and set-protocol elements can override the destination. The optional set-user-name and set-anonymous elements are mutually exclusive and can be used to set the name for authentication or switch to anonymous authentication. The set-mechanism-realm-name , rewrite-user-name-regex , sasl-mechanism-selector , set-mechanism-properties , credentials , set-authorization-name , and providers elements are also optional. The final two optional use-provider-sasl-factory and use-service-loader-sasl-factory elements are mutually exclusive and define how the SASL mechanism factories are discovered for authentication. set-host-name This element overrides the host name for the authenticated call. It has the following attribute. Attribute Name Attribute Description name The host name. set-port-number This element overrides the port number for the authenticated call. It has the following attribute. Attribute Name Attribute Description number The port number. set-protocol This element overrides the protocol for the authenticated call. It has the following attribute. Attribute Name Attribute Description name The protocol. set-user-name This element sets the user name to use for the authentication. It should not be used with the set-anonymous element. It has the following attribute. Attribute Name Attribute Description name The user name to use for authentication. set-anonymous This element is used to switch to anonymous authentication. It should not be used with the set-user-name element. set-mechanism-realm-name This element specifies the name of the realm that will be selected by the SASL mechanism if required. It has the following attribute. Attribute Name Attribute Description name The name of the realm. rewrite-user-name-regex This element defines a regular expression pattern and replacement to rewrite the user name used for authentication. It has the following attributes. Attribute Name Attribute Description pattern A regular expression pattern. replacement The replacement to use to rewrite the user name used for authentication. sasl-mechanism-selector This element specifies a SASL mechanism selector using the syntax from the org.wildfly.security.sasl.SaslMechanismSelector.fromString(string) method. It has the following attribute. Attribute Name Attribute Description selector The SASL mechanism selector. For more information about the grammar required for the sasl-mechanism-selector , see sasl-mechanism-selector Grammar in How to Configure Server Security for JBoss EAP. set-mechanism-properties This element can contain one or more property elements that are to be passed to the authentication mechanisms. property This element defines a property to be passed to the authentication mechanisms. It has the following attributes. Attribute Name Attribute Description key The property name. value The property value. 
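As a sketch of how an authentication rule and a named configuration fit together, the following fragment matches connections to a particular host and applies a configuration that sets the user name, realm, SASL mechanism selector, and a mechanism property. The host, user, realm, mechanism, and property values are illustrative placeholders rather than values taken from this documentation.

Example: Pairing a rule with a configuration (sketch)

<authentication-client xmlns="urn:elytron:client:1.2">
    <authentication-rules>
        <rule use-configuration="example-config">
            <match-host name="appserver.example.com" />
        </rule>
    </authentication-rules>
    <authentication-configurations>
        <configuration name="example-config">
            <set-user-name name="quickstartUser" />
            <set-mechanism-realm-name name="ApplicationRealm" />
            <sasl-mechanism-selector selector="DIGEST-MD5" />
            <set-mechanism-properties>
                <property key="javax.security.sasl.qop" value="auth" />
            </set-mechanism-properties>
        </configuration>
    </authentication-configurations>
</authentication-client>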
credentials This element defines one or more credentials available for use during authentication. It can contain one or more of the following child elements, which are dependent on the credential store implementation: key-store-reference credential-store-reference clear-password key-pair certificate public-key-pem bearer-token oauth2-bearer-token . These are the same child elements as those contained in the protection-parameter-credentials element. See the protection-parameter-credentials element for details and an example configuration. set-authorization-name This element specifies the name that should be used for authorization if it is different from the authentication identity. It has the following attributes. Attribute Name Attribute Description name The name that should be used for authorization. use-provider-sasl-factory This element specifies the java.security.Provider instances that are either inherited or defined in this configuration and that are to be used to locate the available SASL client factories. This element should not be used with the use-service-loader-sasl-factory element. use-service-loader-sasl-factory This element specifies the module that is to be used to discover the SASL client factories using the service loader discovery mechanism. If no module is specified, the class loader that loaded the configuration is used. This element should not be used with the use-provider-sasl-factory element. It has the following attribute. Attribute Name Attribute Description module-name The name of the module. net-authenticator This element contains no configuration. If present, the org.wildfly.security.auth.util.ElytronAuthenticator is registered with java.net.Authenticator.setDefault(Authenticator) . This allows the Elytron authentication client configuration to be used for authentication when JDK APIs are used for HTTP calls that require authentication. Note Because the JDK caches the authentication on the first call across the JVM, it is better to use this approach only on standalone processes that do not require different credentials for different calls to the same URI. ssl-context-rules This optional element defines the SSL context rules. When an ssl-context is required, the URI of the accessed resources as well as an optional abstract type and abstract type authority are matched against the rules defined in the configuration to identify which ssl-context should be used. This element can contain one or more child rule elements. Example: ssl-context-rules Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <ssl-context-rules> <rule use-ssl-context="..."> ... </rule> </ssl-context-rules> ... </authentication-client> </configuration> rule This element defines the rule to match on the SSL context definitions. It has the following attribute. Attribute Name Attribute Description use-ssl-context The SSL context definition that is chosen when rules match. SSL context rule matching is independent of authentication rule matching. The SSL context rule structure is identical to the authentication configuration rule structure, except that it references an SSL context, while the authentication rule references an authentication configuration. It can contain the following child elements. match-no-user match-user match-protocol match-host match-path match-port match-urn match-domain-name match-abstract-type Example: rule Configuration for SSL Context <rule use-ssl-context="..."> <!-- At most one of the following two can be defined. 
--> <match-no-user /> <match-user name="..." /> <!-- Each of the following can be defined at most once. --> <match-protocol name="..." /> <match-host name="..." /> <match-path name="..." /> <match-port number="..." /> <match-urn name="..." /> <match-domain name="..." /> <match-abstract-type name="..." authority="..." /> </rule> ssl-contexts This optional element defines SSL context definitions that are to be chosen by the SSL context rules. Example: ssl-contexts Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <ssl-contexts> <default-ssl-context name="..."/> <ssl-context name="..."> <key-store-ssl-certificate>...</key-store-ssl-certificate> <trust-store key-store-name="..." /> <cipher-suite selector="..." /> <protocol names="... ..." /> <provider-name name="..." /> <providers>...</providers> <certificate-revocation-list path="..." maximum-cert-path="..." /> </ssl-context> </ssl-contexts> </authentication-client> </configuration> default-ssl-context This element takes the SSLContext returned by javax.net.ssl.SSLContext.getDefault() and assigns it a name so it can referenced from the ssl-context-rules . This element can be repeated, meaning the default SSL context can be referenced using different names. ssl-context This element defines an SSL context to use for connections. It can optionally contain one of each of the following child elements. key-store-ssl-certificate trust-store cipher-suite protocol provider-name providers certificate-revocation-list key-store-ssl-certificate This element defines a reference to an entry within a keystore for the key and certificate to use for this SSL context. It has the following attributes. Attribute Name Attribute Description key-store-name The keystore name. This attribute is required. alias The alias of the entry to load from the referenced keystore. This can be omitted only for keystores that contain just a single entry. It can contain the following child elements: key-store-clear-password credential-store-reference key-store-credential This structure is nearly identical to the structure used in the key-store-credential configuration with the exception that here it obtains the entry for the key and for the certificate. However, the nested key-store-clear-password and key-store-credential elements still provide the protection parameter to unlock the entry. Example: key-store-ssl-certificate Configuration <key-store-ssl-certificate key-store-name="..." alias="..."> <key-store-clear-password password="..." /> <key-store-credential>...</key-store-credential> </key-store-ssl-certificate> trust-store This element is a reference to the keystore that is to be used to initialize the TrustManager . It has the following attribute. Attribute Name Attribute Description key-store-name The keystore name. This attribute is required. cipher-suite This element configures the filter for the enabled cipher suites. It has the following attribute. Attribute Name Attribute Description selector The selector to filter the cipher suites. The selector uses the format of the OpenSSL-style cipher list string created by the org.wildfly.security.ssl.CipherSuiteSelector.fromString(selector) method. Example: cipher-suite Configuration Using Default Filtering <cipher-suite selector="DEFAULT" /> protocol This element defines a space separated list of the protocols to be supported. See the client-ssl-context Attributes table in How to Configure Server Security for JBoss EAP for the list of available protocols. 
Red Hat recommends that you use TLSv1.2 . provider-name Once the available providers have been identified, only the provider with the name defined on this element is used. certificate-revocation-list This element defines both the path to the certificate revocation list and the maximum number of non-self-issued intermediate certificates that can exist in a certification path. The presence of this element enables checking the peer's certificate against the certificate revocation list. It has the following attributes. Attribute Name Attribute Description path The path to the certification list. This attribute is optional. maximum-cert-path The maximum number of non-self-issued intermediate certificates that can exist in a certification path. This attribute is optional. providers This element defines how java.security.Provider instances are located when required. It can contain the following child elements. global use-service-loader Because the configuration sections of authentication-client are independent of each other, this element can be configured in the following locations. Example: Locations of providers Configuration <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <providers /> ... <credential-stores> <credential-store name="..."> ... <providers /> </credential-store> </credential-stores> ... <authentication-configurations> <authentication-configuration name="..."> ... <providers /> </authentication-configuration> </authentication-configurations> ... <ssl-contexts> <ssl-context name="..."> ... <providers /> </ssl-context> </ssl-contexts> </authentication-client> </configuration> The providers configuration applies to the element in which it is defined and to any of its child elements unless it is overridden. The specification of a providers in a child element overrides a providers specified in any of its parent elements. If no providers configuration is specified, the default behavior is the equivalent of the following, which gives the Elytron provider priority over any globally registered providers, but also allows for the use of globally registered providers. Example: providers Configuration <providers> <use-service-loader /> <global /> </providers> global This empty element specifies to use the global providers loaded by the java.security.Security.getProviders() method call. use-service-loader This empty element specifies to use the providers that are loaded by the specified module. If no module is specified, the class loader that loaded the authentication client is used. Important Elements Not Currently Used By Any JBoss EAP Authentication Mechanisms The following child elements of the credentials element in the Elytron client configuration are not currently used by any authentication mechanisms in JBoss EAP. They can be used in your own custom implementations of authentication mechanism; however, they are not supported. key-pair public-key-pem key-store-reference certificate 19.1.2. Jakarta Enterprise Beans Client Configuration Using the wildfly-config.xml File You can use the jboss-ejb-client element, which is in the urn:jboss:wildfly-client-ejb:3.0 namespace, to configure Jakarta Enterprise Beans client connections, global interceptors, and invocation timeouts using the wildfly-config.xml file. This section describes how to configure a Jakarta Enterprise Beans client using this element. 
jboss-ejb-client Elements and Attributes The jboss-ejb-client element can optionally contain the following three top level child elements, along with their child elements: invocation-timeout global-interceptors interceptor connections connection interceptors interceptor invocation-timeout This optional element specifies the Jakarta Enterprise Beans invocation timeout. It has the following attribute. Attribute Name Attribute Description seconds The timeout, in seconds, for the Jakarta Enterprise Beans handshake or the method invocation request/response cycle. This attribute is required. If the execution of a method takes longer than the timeout period, the invocation throws a java.util.concurrent.TimeoutException ; however, the server side will not be interrupted. global-interceptors This optional element specifies the global Jakarta Enterprise Beans client interceptors. It can contain any number of interceptor elements. interceptor This element is used to specify a Jakarta Enterprise Beans client interceptor. It has the following attributes. Attribute Name Attribute Description class The name of a class that implements the org.jboss.ejb.client.EJBClientInterceptor interface. This attribute is required. module The name of the module that should be used to load the interceptor class. This attribute is optional. connections This element is used to specify Jakarta Enterprise Beans client connections. It can contain any number of connection elements. connection This element is used to specify a Jakarta Enterprise Beans client connection. It can optionally contain an interceptors element. It has the following attribute. Attribute Name Attribute Description uri The destination URI for the connection. This attribute is required. interceptors This element is used to specify Jakarta Enterprise Beans client interceptors and can contain any number of interceptor elements. Example Jakarta Enterprise Beans Client Configuration in the wildfly-config.xml File The following is an example that configures the Jakarta Enterprise Beans client connections, global interceptors, and invocation timeout using the jboss-ejb-client element in the wildfly-config.xml file. <configuration> ... <jboss-ejb-client xmlns="urn:jboss:wildfly-client-ejb:3.0"> <invocation-timeout seconds="10"/> <connections> <connection uri="remote+http://10.20.30.40:8080"/> </connections> <global-interceptors> <interceptor class="org.jboss.example.ExampleInterceptor"/> </global-interceptors> </jboss-ejb-client> ... </configuration> 19.1.3. HTTP Client Configuration Using the wildfly-config.xml File The following is an example of how to configure HTTP clients using the wildfly-config.xml file. <configuration> ... <http-client xmlns="urn:wildfly-http-client:1.0"> <defaults> <eagerly-acquire-session value="true" /> <buffer-pool buffer-size="2000" max-size="10" direct="true" thread-local-size="1" /> </defaults> </http-client> ... </configuration> Important HTTP client configuration using the wildfly-config.xml file is provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. 19.1.4. Remoting Client Configuration Using the wildfly-config.xml File You can use the endpoint element, which is in the urn:jboss-remoting:5.0 namespace, to configure a remoting client using the wildfly-config.xml file. This section describes how to configure a remoting client using this element. endpoint Elements and Attributes The endpoint element can optionally contain the following two top level child elements, along with their child elements. providers provider connections connection It also has the following attribute: Attribute Name Attribute Description name The endpoint name. This attribute is optional. If not provided, an endpoint name is derived from the system's host name, if possible. providers This optional element specifies transport providers for the remote endpoint. It can contain any number of provider elements. provider This element defines a remote transport provider. It has the following attributes. Attribute Name Attribute Description scheme The primary URI scheme that corresponds to this provider. This attribute is required. aliases The space-separated list of of other URI scheme names that are also recognized for this provider. This attribute is optional. module The name of the module that contains the provider implementation. This attribute is optional. If not provided, the class loader that loads JBoss Remoting searches for the provider class. class The name of the class that implements the transport provider. This attribute is optional. If not provided, the java.util.ServiceLoader facility is used to search for the provider class. connections This optional element specifies connections for the remote endpoint. It can contain any number of connection elements. connection This element defines a connection for the remote endpoint. It has the following attributes. Attribute Name Attribute Description destination The destination URI for the endpoint. This attribute is required. read-timeout The timeout, in seconds, for read operations on the corresponding socket. This attribute is optional; however, it should be provided only if a heartbeat-interval is defined. write-timeout The timeout, in seconds, for a write operation. This attribute is optional; however, it should be provided only if a heartbeat-interval is defined. ip-traffic-class Defines the numeric IP traffic class to use for this connection's traffic. This attribute is optional. tcp-keepalive Boolean setting that determines whether to use TCP keepalive. This attribute is optional. heartbeat-interval The interval, in milliseconds, to use when checking for a connection heartbeat. This attribute is optional. Example Remoting Client Configuration in the wildfly-config.xml File The following is an example that configures a remoting client using the wildfly-config.xml file. <configuration> ... <endpoint xmlns="urn:jboss-remoting:5.0"> <connections> <connection destination="remote+http://10.20.30.40:8080" read-timeout="50" write-timeout="50" heartbeat-interval="10000"/> </connections> </endpoint> ... </configuration> 19.1.5. Default XNIO Worker Configuration Using the wildfly-config.xml File You can use the worker element, which is in the urn:xnio:3.5 namespace, to configure an XNIO worker using the wildfly-config.xml file. This section describes how to configure an XNIO worker client using this element. 
worker Elements and Attributes The worker element can optionally contain the following top level child elements, along with their child elements: daemon-threads worker-name pool-size task-keepalive io-threads stack-size outbound-bind-addresses bind-address daemon-threads This optional element specifies whether worker and task threads should be daemon threads. This element has no content. It has the following attribute. Attribute Name Attribute Description value A boolean value that specifies whether worker and task threads should be daemon threads. A value of true indicates that worker and task threads should be daemon threads. A value of false indicates that they should not be daemon threads. This attribute is required. If this element is not provided, a value of true is assumed. worker-name This element defines the name of the worker. The worker name appears in thread dumps and in Jakarta Management. This element has no content. It has the following attribute. Attribute Name Attribute Description value The name of the worker. This attribute is required. pool-size This optional element defines the maximum size of the worker's task thread pool. This element has no content. It has the following attribute. Attribute Name Attribute Description max-threads A positive integer that specifies the maximum number of threads that should be created. This attribute is required. task-keepalive This optional element establishes the keep-alive time of task threads before they can be expired. It has the following attribute. Attribute Name Attribute Description value A positive integer that specifies the minimum number of seconds to keep idle threads alive. This attribute is required. io-threads This optional element determines how many I/O selector threads should be maintained. Generally this number should be a small constant that is a multiple of the number of available cores. It has the following attribute. Attribute Name Attribute Description value A positive integer that specifies the number of I/O threads. This attribute is required. stack-size This optional element establishes the desired minimum thread stack size for worker threads. This element should only be defined in very specialized situations where density is at a premium. It has the following attribute. Attribute Name Attribute Description value A positive integer that specifies the requested stack size, in bytes. This attribute is required. outbound-bind-addresses This optional element specifies the bind addresses to use for outbound connections. Each bind address mapping consists of a destination IP address block, and a bind address and optional port number to use for connections to destinations within that block. It can contain any number of bind-address elements. bind-address This optional element defines an individual bind address mapping. It has the following attributes. Attribute Name Attribute Description match The IP address block, in CIDR notation, to match. bind-address The IP address to bind to if the address block matches. This attribute is required. bind-port The port number to bind to if the address block matches. This value defauts to 0 , meaning it binds to any port. This attribute is optional. Example XNIO Worker Configuration in the wildfly-config.xml File The following is an example of how to configure the default XNIO worker using the wildfly-config.xml file. <configuration> ... <worker xmlns="urn:xnio:3.5"> <io-threads value="10"/> <task-keepalive value="100"/> </worker> ... </configuration> | [
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <credential-stores> <credential-store name=\"...\" type=\"...\" provider=\"...\" > <attributes> <attribute name=\"...\" value=\"...\" /> </attributes> <protection-parameter-credentials>...</protection-parameter-credentials> </credential-store> </credential-stores> </authentication-client> </configuration>",
"<attributes> <attribute name=\"...\" value=\"...\" /> </attributes>",
"<protection-parameter-credentials> <key-store-reference>...</key-store-reference> <credential-store-reference store=\"...\" alias=\"...\" clear-text=\"...\" /> <clear-password password=\"...\" /> <key-pair public-key-pem=\"...\" private-key-pem=\"...\" /> <certificate private-key-pem=\"...\" pem=\"...\" /> <public-key-pem>...</public-key-pem> <bearer-token value=\"...\" /> <oauth2-bearer-token token-endpoint-uri=\"...\">...</oauth2-bearer-token> </protection-parameter-credentials>",
"<key-store-reference key-store-name=\"...\" alias=\"...\"> <key-store-clear-password password=\"...\" /> <key-store-credential>...</key-store-credential> </key-store-reference>",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"...\"> <!-- The following 3 elements specify where to load the keystore from. --> <file name=\"...\" /> <load-from uri=\"...\" /> <resource name=\"...\" /> <!-- One of the following to specify the protection parameter to unlock the keystore. --> <key-store-clear-password password=\"...\" /> <key-store-credential>...</key-store-credential> </key-store> </key-stores> </authentication-client> </configuration>",
"<key-store-credential key-store-name=\"...\" alias=\"...\"> <key-store-clear-password password=\"...\" /> <key-store-credential>...</key-store-credential> </key-store-credential>",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-rules> <rule use-configuration=\"...\"> </rule> </authentication-rules> </authentication-client> </configuration>",
"<rule use-configuration=\"...\"> <!-- At most one of the following two can be defined. --> <match-no-user /> <match-user name=\"...\" /> <!-- Each of the following can be defined at most once. --> <match-protocol name=\"...\" /> <match-host name=\"...\" /> <match-path name=\"...\" /> <match-port number=\"...\" /> <match-urn name=\"...\" /> <match-domain name=\"...\" /> <match-abstract-type name=\"...\" authority=\"...\" /> </rule>",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-configurations> <configuration name=\"...\"> <!-- Destination Overrides. --> <set-host name=\"...\" /> <set-port number=\"...\" /> <set-protocol name=\"...\" /> <!-- At most one of the following two elements. --> <set-user-name name=\"...\" /> <set-anonymous /> <set-mechanism-realm name=\"...\" /> <rewrite-user-name-regex pattern=\"...\" replacement=\"...\" /> <sasl-mechanism-selector selector=\"...\" /> <set-mechanism-properties> <property key=\"...\" value=\"...\" /> </set-mechanism-properties> <credentials>...</credentials> <set-authorization-name name=\"...\" /> <providers>...</providers> <!-- At most one of the following two elements. --> <use-provider-sasl-factory /> <use-service-loader-sasl-factory module-name=\"...\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <ssl-context-rules> <rule use-ssl-context=\"...\"> </rule> </ssl-context-rules> </authentication-client> </configuration>",
"<rule use-ssl-context=\"...\"> <!-- At most one of the following two can be defined. --> <match-no-user /> <match-user name=\"...\" /> <!-- Each of the following can be defined at most once. --> <match-protocol name=\"...\" /> <match-host name=\"...\" /> <match-path name=\"...\" /> <match-port number=\"...\" /> <match-urn name=\"...\" /> <match-domain name=\"...\" /> <match-abstract-type name=\"...\" authority=\"...\" /> </rule>",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <ssl-contexts> <default-ssl-context name=\"...\"/> <ssl-context name=\"...\"> <key-store-ssl-certificate>...</key-store-ssl-certificate> <trust-store key-store-name=\"...\" /> <cipher-suite selector=\"...\" /> <protocol names=\"... ...\" /> <provider-name name=\"...\" /> <providers>...</providers> <certificate-revocation-list path=\"...\" maximum-cert-path=\"...\" /> </ssl-context> </ssl-contexts> </authentication-client> </configuration>",
"<key-store-ssl-certificate key-store-name=\"...\" alias=\"...\"> <key-store-clear-password password=\"...\" /> <key-store-credential>...</key-store-credential> </key-store-ssl-certificate>",
"<cipher-suite selector=\"DEFAULT\" />",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <providers /> <credential-stores> <credential-store name=\"...\"> <providers /> </credential-store> </credential-stores> <authentication-configurations> <authentication-configuration name=\"...\"> <providers /> </authentication-configuration> </authentication-configurations> <ssl-contexts> <ssl-context name=\"...\"> <providers /> </ssl-context> </ssl-contexts> </authentication-client> </configuration>",
"<providers> <use-service-loader /> <global /> </providers>",
"<configuration> <jboss-ejb-client xmlns=\"urn:jboss:wildfly-client-ejb:3.0\"> <invocation-timeout seconds=\"10\"/> <connections> <connection uri=\"remote+http://10.20.30.40:8080\"/> </connections> <global-interceptors> <interceptor class=\"org.jboss.example.ExampleInterceptor\"/> </global-interceptors> </jboss-ejb-client> </configuration>",
"<configuration> <http-client xmlns=\"urn:wildfly-http-client:1.0\"> <defaults> <eagerly-acquire-session value=\"true\" /> <buffer-pool buffer-size=\"2000\" max-size=\"10\" direct=\"true\" thread-local-size=\"1\" /> </defaults> </http-client> </configuration>",
"<configuration> <endpoint xmlns=\"urn:jboss-remoting:5.0\"> <connections> <connection destination=\"remote+http://10.20.30.40:8080\" read-timeout=\"50\" write-timeout=\"50\" heartbeat-interval=\"10000\"/> </connections> </endpoint> </configuration>",
"<configuration> <worker xmlns=\"urn:xnio:3.5\"> <io-threads value=\"10\"/> <task-keepalive value=\"100\"/> </worker> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/configuring_clients |
Working with model registries | Working with model registries Red Hat OpenShift AI Self-Managed 2.18 Working with model registries in Red Hat OpenShift AI Self-Managed | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_model_registries/index |
Chapter 3. Installing a cluster on OpenStack with customizations | Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.14, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. 
Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the next several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. 
Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.10.3. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.10.4. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. 
RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.10.4.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.10.4.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. 
Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.10.5. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.10.6. Optional: Configuring a cluster with dual-stack networking Important Dual-stack configuration for OpenStack is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can create a dual-stack cluster on RHOSP. However, the dual-stack configuration is enabled only if you are using an RHOSP network with IPv4 and IPv6 subnets. Note RHOSP does not support the following configurations: Conversion of an IPv4 single-stack cluster to a dual-stack cluster network. IPv6 as the primary address family for dual-stack cluster network. 3.10.6.1. Deploying the dual-stack cluster For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. Prerequisites You enabled Dynamic Host Configuration Protocol (DHCP) on the subnets. Procedure Create a network with IPv4 and IPv6 subnets. The available address modes for ipv6-ra-mode and ipv6-address-mode fields are: stateful , stateless and slaac . 
Note The dual-stack network MTU must accommodate both the minimum MTU for IPv6, which is 1280 , and the OVN-Kubernetes encapsulation overhead, which is 100 . Create the API and Ingress VIPs ports. Add the IPv6 subnet to the router to enable router advertisements. If you are using a provider network, you can enable router advertisements by adding the network as an external gateway, which also enables external connectivity. For an IPv4/IPv6 dual-stack cluster where you set IPv4 as the primary endpoint for your cluster nodes, edit the install-config.yaml file like the following example: apiVersion: v1 baseDomain: mydomain.test featureSet: TechPreviewNoUpgrade 1 compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 2 - cidr: "192.168.25.0/24" - cidr: "fd2e:6f44:5dd8:c956::/64" clusterNetwork: 3 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 4 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 5 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 6 controlPlanePort: 7 fixedIPs: 8 - subnet: 9 name: subnet-v4 id: subnet-v4-id - subnet: 10 name: subnet-v6 id: subnet-v6-id network: 11 name: dualstack id: network-id 1 Dual-stack clusters are supported only with the TechPreviewNoUpgrade value. 2 3 4 You must specify an IP address range in the cidr field for both IPv4 and IPv6 address families. 5 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 6 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 7 Specify the dual-stack network details that are used by all the nodes across the cluster. 8 The Classless Inter-Domain Routing (CIDR) of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 9 10 11 You can specify a value for name , id , or both. Note The ip=dhcp,dhcp6 kernel argument, which is set on all of the nodes, results in a single Network Manager connection profile that is activated on multiple interfaces simultaneously. Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it. 3.10.7. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 3.11. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation.
You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 3.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. 
Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. 
Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.17. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test featureSet: TechPreviewNoUpgrade 1 compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 2 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 3 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 4 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 5 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 6 controlPlanePort: 7 fixedIPs: 8 - subnet: 9 name: subnet-v4 id: subnet-v4-id - subnet: 10 name: subnet-v6 id: subnet-v6-id network: 11 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installing-openstack-installer-custom |
Chapter 16. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when the ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods cannot communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking. Issue When the `ovs-multitenant` plugin is used in OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global: | [
"GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",
"oc adm pod-network make-projects-global openshift-storage"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/accessing-odf-console-with-ovs-multitenant-plugin-by-manually-enabling-global-pod-networking_rhodf |
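After running the command above, you can optionally confirm that the openshift-storage project is now globally accessible. With the ovs-multitenant plugin, a global project reports a NETID of 0. This check is a suggested sketch and assumes that the OpenShift SDN NetNamespace resource is available on the cluster:
# Confirm that the openshift-storage project joined the global network (NETID 0)
oc get netnamespace openshift-storage
The output should look similar to the following, where a NETID of 0 indicates global pod networking:
NAME                NETID   EGRESS IPS
openshift-storage   0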
Chapter 7. Scaling the Cluster Monitoring Operator | Chapter 7. Scaling the Cluster Monitoring Operator OpenShift Container Platform exposes metrics that the Cluster Monitoring Operator collects and stores in the Prometheus-based monitoring stack. As an administrator, you can view system resources, containers, and components metrics in one dashboard interface, Grafana. 7.1. Prometheus database storage requirements Red Hat performed various tests for different scale sizes. Note The Prometheus storage requirements below are not prescriptive. Higher resource consumption might be observed in your cluster depending on workload activity and resource use. Table 7.1. Prometheus Database storage requirements based on number of nodes/pods in the cluster Number of Nodes Number of pods Prometheus storage growth per day Prometheus storage growth per 15 days RAM Space (per scale size) Network (per tsdb chunk) 50 1800 6.3 GB 94 GB 6 GB 16 MB 100 3600 13 GB 195 GB 10 GB 26 MB 150 5400 19 GB 283 GB 12 GB 36 MB 200 7200 25 GB 375 GB 14 GB 46 MB Approximately 20 percent of the expected size was added as overhead to ensure that the storage requirements do not exceed the calculated value. The above calculation is for the default OpenShift Container Platform Cluster Monitoring Operator. Note CPU utilization has minor impact. The ratio is approximately 1 core out of 40 per 50 nodes and 1800 pods. Recommendations for OpenShift Container Platform Use at least three infrastructure (infra) nodes. Use at least three openshift-container-storage nodes with non-volatile memory express (NVMe) drives. 7.2. Configuring cluster monitoring You can increase the storage capacity for the Prometheus component in the cluster monitoring stack. Procedure To increase the storage capacity for Prometheus: Create a YAML configuration file, cluster-monitoring-config.yaml . For example: apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring 1 A typical value is PROMETHEUS_RETENTION_PERIOD=15d . Units are measured in time using one of these suffixes: s, m, h, d. 2 4 The storage class for your cluster. 3 A typical value is PROMETHEUS_STORAGE_SIZE=2000Gi . Storage values can be a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. 5 A typical value is ALERTMANAGER_STORAGE_SIZE=20Gi . Storage values can be a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. Add values for the retention period, storage class, and storage sizes. Save the file. Apply the changes by running: USD oc create -f cluster-monitoring-config.yaml | [
"apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring",
"oc create -f cluster-monitoring-config.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/scaling-cluster-monitoring-operator |
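After applying the config map above, you can verify that the monitoring stack picked up the new storage settings. The following commands are a suggested verification sketch and are not part of the original procedure:
# Check that persistent volume claims were created for Prometheus and Alertmanager
oc -n openshift-monitoring get pvc

# Confirm that the Prometheus and Alertmanager pods restarted and are scheduled on the infra nodes
oc -n openshift-monitoring get pods -o wide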
19.6. Device Options | 19.6. Device Options This section provides information about device options. General Device -device <driver>[,<prop>[=<value>][,...]] All drivers support following properties id bus Following drivers are supported (with available properties): pci-assign host bootindex configfd addr rombar romfile multifunction If the device has multiple functions, all of them need to be assigned to the same guest. rtl8139 mac netdev bootindex addr e1000 mac netdev bootindex addr virtio-net-pci ioeventfd vectors indirect event_idx csum guest_csum gso guest_tso4 guest_tso6 guest_ecn guest_ufo host_tso4 host_tso6 host_ecn host_ufo mrg_rxbuf status ctrl_vq ctrl_rx ctrl_vlan ctrl_rx_extra mac netdev bootindex x-txtimer x-txburst tx addr qxl ram_size vram_size revision cmdlog addr ide-drive unit drive physical_block_size bootindex ver wwn virtio-blk-pci class drive logical_block_size physical_block_size min_io_size opt_io_size bootindex ioeventfd vectors indirect_desc event_idx scsi addr virtio-scsi-pci - Technology Preview in 6.3, supported since 6.4. For Windows guests, Windows Server 2003, which was Technology Preview, is no longer supported since 6.5. However, Windows Server 2008 and 2012, and Windows desktop 7 and 8 are fully supported since 6.5. vectors indirect_desc event_idx num_queues addr isa-debugcon isa-serial index iobase irq chardev virtserialport nr chardev name virtconsole nr chardev name virtio-serial-pci vectors class indirect_desc event_idx max_ports flow_control addr ES1370 addr AC97 addr intel-hda addr hda-duplex cad hda-micro cad hda-output cad i6300esb addr ib700 - no properties sga - no properties virtio-balloon-pci indirect_desc event_idx addr usb-tablet migrate port usb-kbd migrate port usb-mouse migrate port usb-ccid - supported since 6.2 port slot usb-host - Technology Preview since 6.2 hostbus hostaddr hostport vendorid productid isobufs port usb-hub - supported since 6.2 port usb-ehci - Technology Preview since 6.2 freq maxframes port usb-storage - Technology Preview since 6.2 drive bootindex serial removable port usb-redir - Technology Preview for 6.3, supported since 6.4 chardev filter scsi-cd - Technology Preview for 6.3, supported since 6.4 drive logical_block_size physical_block_size min_io_size opt_io_size bootindex ver serial scsi-id lun channel-scsi wwn scsi-hd -Technology Preview for 6.3, supported since 6.4 drive logical_block_size physical_block_size min_io_size opt_io_size bootindex ver serial scsi-id lun channel-scsi wwn scsi-block -Technology Preview for 6.3, supported since 6.4 drive bootindex scsi-disk -Technology Preview for 6.3 drive=drive logical_block_size physical_block_size min_io_size opt_io_size bootindex ver serial scsi-id lun channel-scsi wwn piix3-usb-uhci piix4-usb-uhci ccid-card-passthru Global Device Setting -global <device>.<property>=<value> Supported devices and properties as in "General device" section with these additional devices: isa-fdc driveA driveB bootindexA bootindexB qxl-vga ram_size vram_size revision cmdlog addr Character Device -chardev back end,id=<id>[,<options>] Supported back ends are: null ,id=<id> - null device socket ,id=<id>,port=<port>[,host=<host>][,to=<to>][,ipv4][,ipv6][,nodelay][,server][,nowait][,telnet] - tcp socket socket ,id=<id>,path=<path>[,server][,nowait][,telnet] - unix socket file ,id=<id>,path=<path> - trafit to file. 
stdio ,id=<id> - standard i/o spicevmc ,id=<id>,name=<name> - spice channel Enable USB -usb | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-whitelist-device-options |
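As a brief illustration of how the -device and -usb options above combine on a qemu-kvm command line, the following sketch attaches a virtio disk, a virtio network device, and a USB tablet. The device IDs, MAC address, image path, and the -drive and -netdev definitions are illustrative assumptions and are not taken from this guide:
# Minimal sketch of a qemu-kvm invocation using options documented above (values are assumed)
/usr/libexec/qemu-kvm -m 1024 -smp 2 \
  -drive file=/var/lib/libvirt/images/guest.img,if=none,id=drive-virtio-disk0,format=qcow2 \
  -device virtio-blk-pci,drive=drive-virtio-disk0,bootindex=1 \
  -netdev tap,id=hostnet0 \
  -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56 \
  -usb -device usb-tablet,id=input0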
Chapter 2. Managing pipeline experiments | Chapter 2. Managing pipeline experiments 2.1. Overview of pipeline experiments A pipeline experiment is a workspace where you can try different configurations of your pipelines. You can use experiments to organize your runs into logical groups. As a data scientist, you can use OpenShift AI to define, manage, and track pipeline experiments. You can view a record of previously created and archived experiments from the Experiments page in the OpenShift AI user interface. Pipeline experiments contain pipeline runs, including recurring runs. This allows you to try different configurations of your pipelines. When you work with data science pipelines, it is important to monitor and record your pipeline experiments to track the performance of your data science pipelines. You can compare the results of up to 10 pipeline runs at one time, and view available parameter, scalar metric, confusion matrix, and receiver operating characteristic (ROC) curve data for all selected runs. You can view artifacts for an executed pipeline run from the OpenShift AI dashboard. Pipeline artifacts can help you to evaluate the performance of your pipeline runs and make it easier to understand your pipeline components. Pipeline artifacts can range from plain text data to detailed, interactive data visualizations. 2.2. Creating a pipeline experiment Pipeline experiments are workspaces where you can try different configurations of your pipelines. You can also use experiments to organize your pipeline runs into logical groups. Pipeline experiments contain pipeline runs, including recurring runs. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a configured pipeline server. You have imported a pipeline to an active pipeline server. Procedure From the OpenShift AI dashboard, click Experiments Experiments and runs . On the Experiments page, from the Project drop-down list, select the project to create the pipeline experiment in. Click Create experiment . In the Create experiment dialog, configure the pipeline experiment: In the Experiment name field, enter a name for the pipeline experiment. In the Description field, enter a description for the pipeline experiment. Click Create experiment . Verification The pipeline experiment that you created appears on the Experiments tab. 2.3. Archiving a pipeline experiment You can retain records of your pipeline experiments by archiving them. If required, you can restore pipeline experiments from your archive to reuse, or delete pipeline experiments that are no longer required. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and has a pipeline server. You have imported a pipeline to an active pipeline server. A pipeline experiment is available to archive. Procedure From the OpenShift AI dashboard, click Experiments Experiments and runs . On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment that you want to archive. Click the action menu ( ... ) beside the pipeline experiment that you want to archive, and then click Archive . 
In the Archiving experiment dialog, enter the pipeline experiment name in the text field to confirm that you intend to archive it. Click Archive . Verification The archived pipeline experiment does not appear on the Experiments tab, and instead appears on the Archive tab on the Experiments page for the pipeline experiment. 2.4. Deleting an archived pipeline experiment You can delete pipeline experiments from the OpenShift AI experiment archive. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a configured pipeline server. You have imported a pipeline to an active pipeline server. A pipeline experiment is available in the pipeline archive. Procedure From the OpenShift AI dashboard, click Experiments Experiments and runs . On the Experiments page, from the Project drop-down list, select the project that contains the archived pipeline experiment that you want to delete. Click the Archive tab. Click the action menu ( ... ) beside the pipeline experiment that you want to delete, and then click Delete . In the Delete experiment? dialog, enter the pipeline experiment name in the text field to confirm that you intend to delete it. Click Delete . Verification The pipeline experiment that you deleted no longer appears on the Archive tab on the Experiments page. 2.5. Restoring an archived pipeline experiment You can restore an archived pipeline experiment to the active state. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and has a pipeline server. An archived pipeline experiment exists in your project. Procedure From the OpenShift AI dashboard, click Experiments Experiments and runs . On the Experiments page, from the Project drop-down list, select the project that contains the archived pipeline experiment that you want to restore. Click the Archive tab. Click the action menu ( ... ) beside the pipeline experiment that you want to restore, and then click Restore . In the Restore experiment dialog, click Restore . Verification The restored pipeline experiment appears on the Experiments tab on the Experiments page. 2.6. Viewing pipeline task executions When a pipeline run executes, you can view details of executed tasks in each step in a pipeline run from the OpenShift AI dashboard. A step forms part of a task in a pipeline. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a pipeline server. You have imported a pipeline to an active pipeline server. You have previously triggered a pipeline run. Procedure From the OpenShift AI dashboard, click Experiments Executions . On the Executions page, from the Project drop-down list, select the project that contains the experiment for the pipeline task executions that you want to view. Verification On the Executions page, you can view the execution details of each pipeline task execution, such as its name, status, unique ID, and execution type. 
The execution status indicates whether the pipeline task has successfully executed. For further information about the details of the task execution, click the execution name. 2.7. Viewing pipeline artifacts After a pipeline run executes, you can view its pipeline artifacts from the OpenShift AI dashboard. Pipeline artifacts can help you to evaluate the performance of your pipeline runs and make it easier to understand your pipeline components. Pipeline artifacts can range from plain text data to detailed, interactive data visualizations. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a pipeline server. You have imported a pipeline to an active pipeline server. You have previously triggered a pipeline run. Procedure From the OpenShift AI dashboard, click Experiments Artifacts . On the Artifacts page, from the Project drop-down list, select the project that contains the pipeline experiment for the pipeline artifacts that you want to view. Verification On the Artifacts page, you can view the details of each pipeline artifact, such as its name, unique ID, type, and URI. 2.8. Comparing runs in an experiment You can compare up to 10 pipeline runs in the same experiment at one time, and view available parameter, scalar metric, confusion matrix, and receiver operating characteristic (ROC) curve data for all selected runs. To compare runs from different experiments or pipelines, or to view every pipeline run in a project, see Comparing runs in different experiments . Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and has a pipeline server. You have imported a pipeline to an active pipeline server. You have created at least two pipeline runs. Procedure In the OpenShift AI dashboard, select Experiments Experiments and runs . The Experiments page opens. From the Project drop-down list, select the project that contains the runs that you want to compare. On the Experiments tab, in the Experiment column, click the experiment that you want to compare runs for. To select runs that are not in an experiment, click Default . All runs that are created without specifying an experiment will appear in the Default group. The Runs page opens. Select the checkbox to each run that you want to compare, and then click Compare runs . You can compare a maximum of 10 runs at one time. The Compare runs page opens and displays data for the runs that you selected. The Run list section displays a list of selected runs. You can filter the list by run name, pipeline version, start date, and status. The Parameters section displays parameter information for each selected run. Set the Hide parameters with no differences switch to On to hide parameters that have the same values. The Metrics section displays scalar metric, confusion matrix, and ROC curve data for all selected runs. On the Scalar metrics tab, set the Hide parameters with no differences switch to On to hide parameters that have the same values. On the ROC curve tab, in the artifacts list, adjust the ROC curve chart by clearing the checkbox to artifacts that you want to remove from the chart. 
To select different runs for comparison, click Manage runs . The Manage runs dialog opens. From the filter drop-down list, select Run , Pipeline version , Created after , or Status to filter the run list by each value. Clear the checkbox to each run that you want to remove from your comparison. Select the checkbox to each run that you want to add to your comparison. Click Update . Verification The Compare runs page opens and displays data for the runs that you selected. 2.9. Comparing runs in different experiments You can compare up to 10 pipeline runs from any experiment or pipeline in a project, including runs that do not have a corresponding pipeline, and view available parameter, scalar metric, confusion matrix, and receiver operating characteristic (ROC) curve data for all selected runs. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and has a pipeline server. You have imported a pipeline to an active pipeline server. You have created at least two pipeline runs. Procedure In the OpenShift AI dashboard, select Data Science Pipelines Runs . The Runs page opens. From the Project drop-down list, select the project that contains the runs that you want to compare. On the Runs tab, in the Run column, select the checkbox to each run that you want to compare, and then click Compare runs . You can compare a maximum of 10 runs at one time. The Compare runs page opens and displays data for the runs that you selected. The Run list section displays a list of selected runs. You can filter the list by run name, experiment, pipeline version, start date, and status. The Parameters section displays parameter information for each selected run. Set the Hide parameters with no differences switch to On to hide parameters that have the same values. The Metrics section displays scalar metric, confusion matrix, and ROC curve data for all selected runs. On the Scalar metrics tab, set the Hide parameters with no differences switch to On to hide parameters that have the same values. On the ROC curve tab, in the artifacts list, adjust the ROC curve chart by clearing the checkbox to artifacts that you want to remove from the chart. To select different runs for comparison, click Manage runs . The Manage runs page opens. From the filter drop-down list, select Run , Experiment , Pipeline version , Created after , or Status to filter the run list by each value. Clear the checkbox to each run that you want to remove from your comparison. Select the checkbox to each run that you want to add to your comparison. Click Update . Verification The Compare runs page opens and displays data for the runs that you selected. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_science_pipelines/managing-pipeline-experiments_ds-pipelines |
Chapter 8. Virtual machines 8.1. Creating virtual machines Use one of these procedures to create a virtual machine: Quick Start guided tour Running the wizard Pasting a pre-configured YAML file with the virtual machine wizard Using the CLI Warning Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines. Templates without a boot source are labeled as Boot source required . You can use these templates if you complete the steps for adding a boot source to the virtual machine . 8.1.1. Using a Quick Start to create a virtual machine The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective. When you click on a Quick Start tile and begin the tour, the system guides you through the process. Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine. Prerequisites Access to the website where you can download the URL link for the operating system image. Procedure In the web console, select Quick Starts from the Help menu. Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine . Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtual Machines tab displays the virtual machine. 8.1.2. Running the virtual machine wizard to create a virtual machine The web console features a wizard that guides you through the process of selecting a virtual machine template and creating a virtual machine. Red Hat virtual machine templates are preconfigured with an operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). When templates are configured with a boot source, they are labeled with a customized label text or the default label text Available boot source . These templates are then ready to be used for creating virtual machines. You can select a template from the list of preconfigured templates, review the settings, and create a virtual machine in the Create virtual machine from template wizard. If you choose to customize your virtual machine, the wizard guides you through the General , Networking , Storage , Advanced , and Review steps. All required fields displayed by the wizard are marked by a *. You can create network interface controllers (NICs) and storage disks later and attach them to virtual machines. Procedure Click Workloads Virtualization from the side menu. From the Virtual Machines tab or the Templates tab, click Create and select Virtual Machine with Wizard . Select a template that is configured with a boot source. Click Next to go to the Review and create step. Clear the Start this virtual machine after creation checkbox if you do not want to start the virtual machine now.
Click Create virtual machine and exit the wizard or continue with the wizard to customize the virtual machine. Click Customize virtual machine to go to the General step. Optional: Edit the Name field to specify a custom name for the virtual machine. Optional: In the Description field, add a description. Click Next to go to the Networking step. A nic0 NIC is attached by default. Optional: Click Add Network Interface to create additional NICs. Optional: You can remove any or all NICs by clicking the Options menu and selecting Delete . A virtual machine does not need a NIC attached to be created. You can create NICs after the virtual machine has been created. Click Next to go to the Storage step. Optional: Click Add Disk to create additional disks. These disks can be removed by clicking the Options menu and selecting Delete . Optional: Click the Options menu to edit the disk and save your changes. Click Next to go to the Advanced step and choose one of the following options: If you selected a Linux template to create the VM, review the details for Cloud-init and configure SSH access. Note Statically inject an SSH key by using the custom script in cloud-init or in the wizard. This allows you to securely and remotely manage virtual machines and to manage and transfer information. This step is strongly recommended to secure your VM. If you selected a Windows template to create the VM, use the SysPrep section to upload answer files in XML format for automated Windows setup. Click Next to go to the Review step and review the settings for the virtual machine. Click Create Virtual Machine . Click See virtual machine details to view the Overview for this virtual machine. The virtual machine is listed in the Virtual Machines tab. Refer to the virtual machine wizard fields section when running the web console wizard. 8.1.2.1. Virtual machine wizard fields Name Parameter Description Name The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), and hyphens ( - ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods ( . ), or special characters. Description Optional description field. Operating System The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template. Boot Source Import via URL (creates PVC) Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image. Clone existing PVC (creates PVC) Select an existing persistent volume claim available on the cluster and clone it. Import via Registry (creates PVC) Provision a virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo . PXE (network boot - adds network interface) Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition. Persistent Volume Claim project Project name that you want to use for cloning the PVC. Persistent Volume Claim name PVC name that should apply to this virtual machine template if you are cloning an existing PVC. Mount this as a CD-ROM boot source A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later.
Flavor Tiny, Small, Medium, Large, Custom Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template. If you choose a default template, you can override the cpus and memsize values in the template using custom values to create a custom template. Alternatively, you can create a custom template by modifying the cpus and memsize values in the Details tab on the Workloads Virtualization page. Workload Type Note If you choose the incorrect Workload Type , there could be performance or resource utilization issues (such as a slow UI). Desktop A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. Use this template class or the Server template class to prioritize VM density over guaranteed VM performance. Server Balances performance and it is compatible with a wide range of server workloads. Use this template class or the Desktop template class to prioritize VM density over guaranteed VM performance. High-Performance (requires CPU Manager) A virtual machine configuration that is optimized for high-performance workloads. Use this template class to prioritize guaranteed VM performance over VM density. Start this virtual machine after creation. This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created. Enable the CPU Manager to use the high-performance workload profile. 8.1.2.2. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks. Select SR-IOV if you configured an SR-IOV network device and defined that network in the namespace. MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 8.1.2.3. Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. 
Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile . Note Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization. To manually specify Volume Mode and Access Mode , you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default. Name Mode description Parameter Parameter description Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Access mode of the persistent volume. ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This is required for some features, such as live migration of virtual machines between nodes. ReadOnlyMany (ROX) Volume can be mounted as read only by many nodes. 8.1.2.4. Cloud-init fields Name Description Hostname Sets a specific hostname for the virtual machine. Authorized SSH Keys The user's public key that is copied to ~/.ssh/authorized_keys on the virtual machine. Custom script Replaces other options with a field in which you paste a custom cloud-init script. To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile . 8.1.2.5. Pasting in a pre-configured YAML file to create a virtual machine Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen. If your YAML configuration is invalid when you click Create , an error message indicates the parameter in which the error occurs. Only one error is shown at a time. Note Navigating away from the YAML screen while editing cancels any changes to the configuration you have made. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Click Create and select Virtual Machine With YAML . Write or paste your virtual machine configuration in the editable window. Alternatively, use the example virtual machine provided by default in the YAML screen. Optional: Click Download to download the YAML configuration file in its present state. Click Create to create the virtual machine. The virtual machine is listed in the Virtual Machines tab. 8.1.3. Using the CLI to create a virtual machine You can create a virtual machine from a VirtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM: Example 8.1. Example manifest for a RHEL VM (an illustrative sketch of such a manifest is shown after this procedure) 1 Specify the name of the virtual machine. 2 Specify the password for cloud-user. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name>
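The RHEL example manifest itself (Example 8.1) is not reproduced in this excerpt. The following is a minimal sketch of what such a VirtualMachine manifest might look like; the container disk image, disk names, memory size, and cloud-init user data are illustrative assumptions rather than the exact values from the original example, and only the two callouts (the VM name and the cloud-user password) correspond to the text above:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-vm # (1) name of the virtual machine
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: registry.redhat.io/rhel8/rhel-guest-image # assumed image reference; replace with a RHEL guest image your cluster can pull
      - name: cloudinitdisk
        cloudInitNoCloud: # (2) the password for cloud-user is set in the user data below
          userData: |-
            #cloud-config
            user: cloud-user
            password: <password>
            chpasswd: { expire: false }
8.1.4.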
Virtual machine storage volume types Storage volume type Description ephemeral A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim . The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. persistentVolumeClaim Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. dataVolume Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "" . If you specify any other value for type , such as persistentVolumeClaim , a warning is displayed, and the virtual machine does not start. cloudInitNoCloud Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. containerDisk References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. emptyDisk Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. 8.1.5. About RunStrategies for virtual machines A RunStrategy for virtual machines determines a virtual machine instance's (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting exists in the virtual machine configuration process as an alternative to the spec.running setting. The spec.runStrategy setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running setting with only true or false responses. However, the two settings are mutually exclusive. Only either spec.running or spec.runStrategy can be used. An error occurs if both are used. There are four defined RunStrategies. Always A VMI is always present when a virtual machine is created. 
A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true . RerunOnFailure A VMI is re-created if the instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down. Manual The start , stop , and restart virtctl client commands can be used to control the VMI's state and existence. Halted No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false . Different combinations of the start , stop and restart virtctl commands affect which RunStrategy is used. The following table follows a VM's transition from different states. The first column shows the VM's initial RunStrategy . Each additional column shows a virtctl command and the new RunStrategy after that command is run. Initial RunStrategy start stop restart Always - Halted Always RerunOnFailure - Halted RerunOnFailure Manual Manual Manual Manual Halted Always - - Note In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node. apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template: ... 1 The VMI's current RunStrategy setting. 8.1.6. Additional resources The VirtualMachineSpec definition in the KubeVirt v0.44.2 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification. Note The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in OpenShift Virtualization. See Prepare a container disk before adding it to a virtual machine as a containerDisk volume. See Deploying machine health checks for further details on deploying and enabling machine health checks. See Installer-provisioned infrastructure overview for further details on installer-provisioned infrastructure. Customizing the storage profile 8.2. Editing virtual machines You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen. 8.2.1. Editing a virtual machine in the web console Edit select values of a virtual machine in the web console by clicking the pencil icon to the relevant field. Other values can be edited using the CLI. Labels and annotations are editable for both preconfigured Red Hat templates and your custom virtual machine templates. All other values are editable only for custom virtual machine templates that users have created using the Red Hat templates or the Create Virtual Machine Template wizard. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine. Click the Details tab. Click the pencil icon to make a field editable. Make the relevant changes and click Save . Note If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.2.2. 
Editing a virtual machine YAML configuration using the web console You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed. If you edit the YAML configuration while the virtual machine is running, changes will not take effect until you restart the virtual machine. Note Navigating away from the YAML screen while editing cancels any changes to the configuration you have made. Procedure Click Workloads Virtualization from the side menu. Select a virtual machine. Click the YAML tab to display the editable configuration. Optional: You can click Download to download the YAML file locally in its current state. Edit the file and click Save . A confirmation message shows that the modification has been successful and includes the updated version number for the object. 8.2.3. Editing a virtual machine YAML configuration using the CLI Use this procedure to edit a virtual machine YAML configuration using the CLI. Prerequisites You configured a virtual machine with a YAML object configuration file. You installed the oc CLI. Procedure Run the following command to update the virtual machine configuration: USD oc edit <object_type> <object_ID> Open the object configuration. Edit the YAML. If you edit a running virtual machine, you need to do one of the following: Restart the virtual machine. Run the following command for the new configuration to take effect: USD oc apply <object_type> <object_ID> 8.2.4. Adding a virtual disk to a virtual machine Use this procedure to add a virtual disk to a virtual machine. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Disks tab. In the Add Disk window, specify the Source , Name , Size , Type , Interface , and Storage Class . Advanced: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: In the Advanced list, specify the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile . 8.2.4.1. Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . 
), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile . Note Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization. To manually specify Volume Mode and Access Mode , you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default. Name Mode description Parameter Parameter description Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Access mode of the persistent volume. ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This is required for some features, such as live migration of virtual machines between nodes. ReadOnlyMany (ROX) Volume can be mounted as read only by many nodes. 8.2.5. Adding a network interface to a virtual machine Use this procedure to add a network interface to a virtual machine. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Network Interfaces tab. Click Add Network Interface . In the Add Network Interface window, specify the Name , Model , Network , Type , and MAC Address of the network interface. Click Add . Note If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.2.5.1. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks. Select SR-IOV if you configured an SR-IOV network device and defined that network in the namespace. MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 8.2.6. Editing CD-ROMs for Virtual Machines Use the following procedure to edit CD-ROMs for virtual machines. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. 
Select a virtual machine to open the Virtual Machine Overview screen. Click the Disks tab. Click the Options menu for the CD-ROM that you want to edit and select Edit . In the Edit CD-ROM window, edit the fields: Source , Persistent Volume Claim , Name , Type , and Interface . Click Save . 8.2.7. Additional resources Customizing the storage profile 8.3. Editing boot order You can update the values for a boot order list by using the web console or the CLI. With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 8.3.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.3.2. Editing a boot order list in the web console Edit the boot order list in the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.3.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm example Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). 
For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - bootOrder: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. Save the YAML file. Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console. 8.3.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.4. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 8.4.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster. Note When you delete a virtual machine, the data volume it uses is automatically deleted. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Click the Options menu of the virtual machine that you want to delete and select Delete Virtual Machine . Alternatively, click the virtual machine name to open the Virtual Machine Overview screen and click Actions Delete Virtual Machine . In the confirmation pop-up window, click Delete to permanently delete the virtual machine. 8.4.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Note When you delete a virtual machine, the data volume it uses is automatically deleted. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace. 8.5. Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 8.5.1.
About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. 8.5.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis 8.5.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Workloads Virtualization from the side menu. A list of VMs and standalone VMIs displays. You can identify standalone VMIs by the dark colored badges that display to the virtual machine instance names. 8.5.4. Editing a standalone virtual machine instance using the web console You can edit annotations and labels for a standalone virtual machine instance (VMI) using the web console. Other items displayed in the Details page for a standalone VMI are not editable. Procedure Click Workloads Virtualization from the side menu. A list of virtual machines (VMs) and standalone VMIs displays. Click the name of a standalone VMI to open the Virtual Machine Instance Overview screen. Click the Details tab. Click the pencil icon that is located on the right side of Annotations . Make the relevant changes and click Save . Note To edit labels for a standalone VMI, click Actions and select Edit Labels. Make the relevant changes and click Save . 8.5.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 8.5.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the OpenShift Container Platform web console, click Workloads Virtualization from the side menu. Click the ... button of the standalone virtual machine instance (VMI) that you want to delete and select Delete Virtual Machine Instance . 
Alternatively, click the name of the standalone VMI. The Virtual Machine Instance Overview page displays. Select Actions Delete Virtual Machine Instance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 8.6. Controlling virtual machine states You can stop, start, restart, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port. 8.6.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you start it: Access the Virtual Machine Overview screen by clicking the name of the virtual machine. Click Actions . Select Start Virtual Machine . In the confirmation window, click Start to start the virtual machine. Note When you start virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 8.6.2. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Find the row that contains the virtual machine that you want to restart. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you restart it: Access the Virtual Machine Overview screen by clicking the name of the virtual machine. Click Actions . Select Restart Virtual Machine . In the confirmation window, click Restart to restart the virtual machine. 8.6.3. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Find the row that contains the virtual machine that you want to stop. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you stop it: Access the Virtual Machine Overview screen by clicking the name of the virtual machine. Click Actions . Select Stop Virtual Machine . In the confirmation window, click Stop to stop the virtual machine. 8.6.4. Unpausing a virtual machine You can unpause a paused virtual machine from the web console. Prerequisites At least one of your virtual machines must have a status of Paused . Note You can pause virtual machines by using the virtctl client. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. 
Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: In the Status column, click Paused . To view comprehensive information about the selected virtual machine before you unpause it: Access the Virtual Machine Overview screen by clicking the name of the virtual machine. Click the pencil icon that is located on the right side of Status . In the confirmation window, click Unpause to unpause the virtual machine. 8.7. Accessing virtual machine consoles OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands. 8.7.1. Accessing virtual machine consoles in the OpenShift Container Platform web console You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console. You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console. 8.7.1.1. Connecting to the serial console Connect to the serial console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview page. Click Console . The VNC console opens by default. Select Disconnect before switching to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background. Click the VNC Console drop-down list and select Serial Console . Click Disconnect to end the console session. Optional: Open the serial console in a separate window by clicking Open Console in New Window . 8.7.1.2. Connecting to the VNC console Connect to the VNC console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview page. Click the Console tab. The VNC console opens by default. Optional: Open the VNC console in a separate window by clicking Open Console in New Window . Optional: Send key combinations to the virtual machine by clicking Send Key . Click outside the console window and then click Disconnect to end the session. 8.7.1.3. Connecting to a Windows virtual machine with RDP The desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines. To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console and supply it to your preferred RDP client. Prerequisites A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers. A layer-2 NIC attached to the virtual machine. An RDP client installed on a machine on the same network as the Windows virtual machine. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. 
Click the Virtual Machines tab. Select a Windows virtual machine to open the Virtual Machine Overview screen. Click the Console tab. In the Console list, select Desktop Viewer . In the Network Interface list, select the layer-2 NIC. Click Launch Remote Desktop to download the console.rdp file. Open an RDP client and reference the console.rdp file. For example, using remmina : USD remmina --connect /path/to/console.rdp Enter the Administrator user name and password to connect to the Windows virtual machine. 8.7.1.4. Copying the SSH command from the web console Copy the command to access a running virtual machine (VM) via SSH from the Actions list in the web console. Procedure In the OpenShift Container Platform console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview page. From the Actions list, select Copy SSH Command . You can now paste this command onto the OpenShift CLI ( oc ). 8.7.2. Accessing virtual machine consoles by using CLI commands 8.7.2.1. Accessing a virtual machine instance via SSH You can use SSH to access a virtual machine (VM) after you expose port 22 on it. The virtctl expose command forwards a virtual machine instance (VMI) port to a node port and creates a service for enabled access. The following example creates the fedora-vm-ssh service that forwards traffic from a specific port of cluster nodes to port 22 of the <fedora-vm> virtual machine. Prerequisites You must be in the same project as the VMI. The VMI you want to access must be connected to the default pod network by using the masquerade binding method. The VMI you want to access must be running. Install the OpenShift CLI ( oc ). Procedure Run the following command to create the fedora-vm-ssh service: 1 <fedora-vm> is the name of the VM that you run the fedora-vm-ssh service on. Check the service to find out which port the service acquired: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE fedora-vm-ssh NodePort 127.0.0.1 <none> 22:32551/TCP 6s In this example, the service acquired the 32551 port. Log in to the VMI via SSH. Use the ipAddress of any of the cluster nodes and the port that you found in the step: USD ssh username@<node_IP_address> -p 32551 8.7.2.2. Accessing a virtual machine via SSH with YAML configurations You can enable an SSH connection to a virtual machine (VM) without the need to run the virtctl expose command. When the YAML file for the VM and the YAML file for the service are configured and applied, the service forwards the SSH traffic to the VM. The following examples show the configurations for the VM's YAML file and the service YAML file. Prerequisites Install the OpenShift CLI ( oc ). Create a namespace for the VM's YAML file by using the oc create namespace command and specifying a name for the namespace. Procedure In the YAML file for the VM, add the label and a value for exposing the service for SSH connections. 
Enable the masquerade feature for the interface: Example VirtualMachine definition apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: namespace: ssh-ns 1 name: vm-ssh spec: running: false template: metadata: labels: kubevirt.io/vm: vm-ssh special: vm-ssh 2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} 3 name: testmasquerade 4 rng: {} machine: type: "" resources: requests: memory: 1024M networks: - name: testmasquerade pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #cloud-config user: fedora password: fedora chpasswd: {expire: False} # ... 1 Name of the namespace created by the oc create namespace command. 2 Label used by the service to identify the virtual machine instances that are enabled for SSH traffic connections. The label can be any key:value pair that is added as a label to this YAML file and as a selector in the service YAML file. 3 The interface type is masquerade . 4 The name of this interface is testmasquerade . Create the VM: USD oc create -f <path_for_the_VM_YAML_file> Start the VM: USD virtctl start vm-ssh In the YAML file for the service, specify the service name, port number, and the target port. Example Service definition apiVersion: v1 kind: Service metadata: name: svc-ssh 1 namespace: ssh-ns 2 spec: ports: - targetPort: 22 3 protocol: TCP port: 27017 selector: special: vm-ssh 4 type: NodePort # ... 1 Name of the SSH service. 2 Name of the namespace created by the oc create namespace command. 3 The target port number for the SSH connection. 4 The selector name and value must match the label specified in the YAML file for the VM. Create the service: USD oc create -f <path_for_the_service_YAML_file> Verify that the VM is running: USD oc get vmi Example output NAME AGE PHASE IP NODENAME vm-ssh 6s Running 10.244.196.152 node01 Check the service to find out which port the service acquired: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc-ssh NodePort 10.106.236.208 <none> 27017:30093/TCP 22s In this example, the service acquired the port number 30093. Run the following command to obtain the IP address for the node: USD oc get node <node_name> -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.22.1 192.168.55.101 <none> Log in to the VM via SSH by specifying the IP address of the node where the VM is running and the port number. Use the port number displayed by the oc get svc command and the IP address of the node displayed by the oc get node command. The following example shows the ssh command with the username, node's IP address, and the port number: USD ssh fedora@192.168.55.101 -p 30093 8.7.2.3. Accessing the serial console of a virtual machine instance The virtctl console command opens a serial console to the specified virtual machine instance. Prerequisites The virt-viewer package must be installed. The virtual machine instance you want to access must be running. Procedure Connect to the serial console with virtctl : USD virtctl console <VMI> 8.7.2.4. Accessing the graphical console of a virtual machine instance with VNC The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package. Prerequisites The virt-viewer package must be installed.
The virtual machine instance you want to access must be running. Note If you use virtctl via SSH on a remote machine, you must forward the X session to your machine. Procedure Connect to the graphical interface with the virtctl utility: USD virtctl vnc <VMI> If the command failed, try using the -v flag to collect troubleshooting information: USD virtctl vnc <VMI> -v 4 8.7.2.5. Connecting to a Windows virtual machine with an RDP console The Remote Desktop Protocol (RDP) provides a better console experience for connecting to Windows virtual machines. To connect to a Windows virtual machine with RDP, specify the IP address of the attached L2 NIC to your RDP client. Prerequisites A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers. A layer 2 NIC attached to the virtual machine. An RDP client installed on a machine on the same network as the Windows virtual machine. Procedure Log in to the OpenShift Virtualization cluster through the oc CLI tool as a user with an access token. USD oc login -u <user> https://<cluster.example.com>:8443 Use oc describe vmi to display the configuration of the running Windows virtual machine. USD oc describe vmi <windows-vmi-name> Example output ... spec: networks: - name: default pod: {} - multus: networkName: cnv-bridge name: bridge-net ... status: interfaces: - interfaceName: eth0 ipAddress: 198.51.100.0/24 ipAddresses: 198.51.100.0/24 mac: a0:36:9f:0f:b1:70 name: default - interfaceName: eth1 ipAddress: 192.0.2.0/24 ipAddresses: 192.0.2.0/24 2001:db8::/32 mac: 00:17:a4:77:77:25 name: bridge-net ... Identify and copy the IP address of the layer 2 network interface. This is 192.0.2.0 in the above example, or 2001:db8:: if you prefer IPv6. Open an RDP client and use the IP address copied in the step for the connection. Enter the Administrator user name and password to connect to the Windows virtual machine. 8.8. Triggering virtual machine failover by resolving a failed node If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. Note If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks: Failed nodes are automatically recycled. Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes. 8.8.1. Prerequisites A node where a virtual machine was running has the NotReady condition . The virtual machine that was running on the failed node has RunStrategy set to Always . You have installed the OpenShift CLI ( oc ). 8.8.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. 
Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 8.8.3. Verifying virtual machine failover After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI. 8.8.3.1. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis 8.9. Installing the QEMU guest agent on virtual machines The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks. 8.9.1. Installing QEMU guest agent on a Linux virtual machine The qemu-guest-agent is widely available and included by default in Red Hat virtual machines. Install the agent and start the service. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Access the virtual machine command line through one of the consoles or by SSH. Install the QEMU guest agent on the virtual machine: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent You can also install and start the QEMU guest agent by using the custom script field in the cloud-init section of the wizard when creating either virtual machines or virtual machine templates in the web console. 8.9.2. Installing QEMU guest agent on a Windows virtual machine For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or new Windows system. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken.
If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 8.9.2.1. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 8.9.2.2. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 8.10. Viewing the QEMU guest agent information for virtual machines When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks. 8.10.1. Prerequisites Install the QEMU guest agent on the virtual machine. 8.10.2. About the QEMU guest agent information in the web console When the QEMU guest agent is installed, the Details pane within the Virtual Machine Overview tab and the Details tab display information about the hostname, operating system, time zone, and logged in users. The Virtual Machine Overview shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. The Disks tab displays a table with information for file systems. Note If the QEMU guest agent is not installed, the Virtual Machine Overview tab and the Details tab display information about the operating system that was specified when the virtual machine was created. 8.10.3. 
Viewing the QEMU guest agent information in the web console You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host. Procedure Click Workloads Virtual Machines from the side menu. Click the Virtual Machines tab. Select a virtual machine name to open the Virtual Machine Overview screen and view the Details pane. Click Logged in users to view the Details tab that shows information for users. Click the Disks tab to view information about the file systems. 8.11. Managing config maps, secrets, and service accounts in virtual machines You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can: Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine. Store non-confidential configuration data in a config map so that a pod or another object can consume the data. Allow a component to access the API server by associating a service account with that component. Note OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead. 8.11.1. Adding a secret, config map, or service account to a virtual machine Add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Environment tab. Click Select a resource and select a secret, config map, or service account from the list. A six character serial number is automatically generated for the selected resource. Click Save . Optional. Add another object by clicking Add Config Map, Secret or Service Account . Note You can reset the form to the last saved state by clicking Reload . The Environment resources are added to the virtual machine as disks. You can mount the secret, config map, or service account as you would mount any other disk. If the virtual machine is running, changes will not take effect until you restart the virtual machine. The newly added resources are marked as pending changes for both the Environment and Disks tab in the Pending Changes banner at the top of the page. Verification From the Virtual Machine Overview page, click the Disks tab. Check to ensure that the secret, config map, or service account is included in the list of disks. Optional. Choose the appropriate method to apply your changes: If the virtual machine is running, restart the virtual machine by clicking Actions Restart Virtual Machine . If the virtual machine is stopped, start the virtual machine by clicking Actions Start Virtual Machine . You can now mount the secret, config map, or service account as you would mount any other disk. 8.11.2. Removing a secret, config map, or service account from a virtual machine Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console. Prerequisites You must have at least one secret, config map, or service account that is attached to a virtual machine. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. 
Click the Environment tab. Find the item that you want to delete in the list, and click Remove on the right side of the item. Click Save . Note You can reset the form to the last saved state by clicking Reload . Verification From the Virtual Machine Overview page, click the Disks tab. Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks. 8.11.3. Additional resources Providing sensitive data to pods Understanding and creating service accounts Understanding config maps 8.12. Installing VirtIO driver on an existing Windows virtual machine 8.12.1. About VirtIO drivers VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog . The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or added to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine. See also: Installing Virtio drivers on a new Windows virtual machine . 8.12.2. Supported VirtIO drivers for Microsoft Windows virtual machines Table 8.1. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes displays as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 8.12.3. Adding VirtIO drivers container disk to a virtual machine OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog . To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file. Prerequisites Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog . This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it not already present in the cluster, but it can reduce installation time. Procedure Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster. spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. 
If you specify the bootOrder for a disk, it must be specified for all disks in the configuration. The disk is available once the virtual machine has started: If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect. If the virtual machine is not running, use virtctl start <vm> . After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive. 8.12.4. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 8.12.5. Removing the VirtIO container disk from a virtual machine After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file. Procedure Edit the configuration file and remove the disk and the volume . USD oc edit vm <vm-name> spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk Reboot the virtual machine for the changes to take effect. 8.13. Installing VirtIO driver on a new Windows virtual machine 8.13.1. Prerequisites Windows installation media accessible by the virtual machine, such as importing an ISO into a data volume and attaching it to the virtual machine. 8.13.2. About VirtIO drivers VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog . The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or added to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine. 
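For example, before you detach the container disk, you can confirm which disks are currently defined on the virtual machine by listing the disk names in its specification. The following sketch is an illustration only and assumes a virtual machine named win10-vm; substitute the name of your own virtual machine: USD oc get vm win10-vm -o jsonpath='{.spec.template.spec.domain.devices.disks[*].name}' # win10-vm is an example name If virtiocontainerdisk appears in the output, the VirtIO driver container disk is still attached to the virtual machine.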
See also: Installing VirtIO driver on an existing Windows virtual machine . 8.13.3. Supported VirtIO drivers for Microsoft Windows virtual machines Table 8.2. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes displays as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 8.13.4. Adding VirtIO drivers container disk to a virtual machine OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog . To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file. Prerequisites Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog . This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it not already present in the cluster, but it can reduce installation time. Procedure Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster. spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration. The disk is available once the virtual machine has started: If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect. If the virtual machine is not running, use virtctl start <vm> . After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive. 8.13.5. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. 
Complete the Windows installation. 8.13.6. Removing the VirtIO container disk from a virtual machine After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file. Procedure Edit the configuration file and remove the disk and the volume . USD oc edit vm <vm-name> spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk Reboot the virtual machine for the changes to take effect. 8.14. Advanced virtual machine management 8.14.1. Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 8.14.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 8.14.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 8.14.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 8.14.2.1. About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. Note Affinity rules only apply during scheduling. 
OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. 8.14.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 8.14.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 ... 8.14.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 8.14.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 
2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 8.14.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" ... 8.14.2.3. Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 8.14.3. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 8.14.3.1. Configuring certificate rotation You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 8.14.3.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed. 8.14.4. 
Automating management tasks You can automate OpenShift Virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine. 8.14.4.1. About Red Hat Ansible Automation Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for OpenShift Virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations. Ansible provides a way to automate OpenShift Virtualization management, which you can also accomplish by using the oc CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules. 8.14.4.2. Automating virtual machine creation You can use the kubevirt_vm Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform. Prerequisites Red Hat Ansible Engine version 2.8 or newer Procedure Edit an Ansible Playbook YAML file so that it includes the kubevirt_vm task: kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus: Note This snippet only includes the kubevirt_vm portion of the playbook. Edit the values to reflect the virtual machine you want to create, including the namespace , the number of cpu_cores , the memory , and the disks . For example: kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio If you want the virtual machine to boot immediately after creation, add state: running to the YAML file. For example: kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1 1 Changing this value to state: absent deletes the virtual machine, if it already exists. Run the ansible-playbook command, using your playbook's file name as the only argument: USD ansible-playbook create-vm.yaml Review the output to determine if the play was successful: Example output (...) TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again: USD ansible-playbook create-vm.yaml To verify that the virtual machine was created, try to access the VM console . 8.14.4.3. Example: Ansible Playbook for creating virtual machines You can use the kubevirt_vm Ansible Playbook to automate virtual machine creation. The following YAML file is an example of the kubevirt_vm playbook. It includes sample values that you must replace with your own information if you run the playbook. --- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio Additional information Intro to Playbooks Tools for Validating Playbooks 8.14.5. Using EFI mode for virtual machines You can boot a virtual machine (VM) in Extensible Firmware Interface (EFI) mode. 
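For example, you can check whether an existing VM already requests EFI firmware by inspecting its firmware stanza. The following sketch is an assumption-based example that reuses the vm-secureboot name from the manifest shown later in this section; replace it with your own VM name: USD oc get vm vm-secureboot -o jsonpath='{.spec.template.spec.domain.firmware}' # vm-secureboot is an example name If the output contains an efi bootloader entry, the VM is already configured to boot in EFI mode; an empty result means the VM boots with the default BIOS firmware.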
8.14.5.1. About EFI mode for virtual machines Extensible Firmware Interface (EFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. EFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 8.14.5.2. Booting virtual machines in EFI mode You can configure a virtual machine to boot in EFI mode by editing the VM manifest. Prerequisites Install the OpenShift CLI ( oc ). Procedure Create a YAML file that defines a VM object. Use the firmware stanza of the example YAML file: Booting in EFI mode with secure boot active apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 #... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in EFI mode to occur. 2 OpenShift Virtualization supports a VM with or without Secure Boot when using EFI mode. If Secure Boot is enabled, then EFI mode is required. However, EFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 8.14.6. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 8.14.6.1. Prerequisites A Linux bridge must be connected . The PXE server must be connected to the same VLAN as the bridge. 8.14.6.2. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites A Linux bridge must be connected. The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ "cniVersion": "0.3.1", "name": "pxe-net-conf", "plugins": [ { "type": "cnv-bridge", "bridge": "br1", "vlan": 1 1 }, { "type": "cnv-tuning" 2 } ] }' 1 Optional: The VLAN tag. 2 The cnv-tuning plugin provides support for custom MAC addresses. Note The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN. Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. 
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from OpenShift Container Platform. USD ip addr Example output ... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 8.14.6.3. OpenShift Virtualization networking glossary OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Preboot eXecution Environment (PXE) an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client. 8.14.7. Managing guest memory If you want to adjust guest memory settings to suit a specific use case, you can do so by editing the guest's YAML configuration file. OpenShift Virtualization allows you to configure guest memory overcommitment and disable guest memory overhead accounting. Warning The following procedures increase the chance that virtual machine processes will be killed due to memory pressure. Proceed only if you understand the risks. 8.14.7.1. 
Configuring guest memory overcommitment If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host's memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host. For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines (VMs) with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time. Important Memory overcommitment increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed). The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors. Procedure To explicitly tell the virtual machine instance that it has more memory available than was requested from the cluster, edit the virtual machine configuration file and set spec.domain.memory.guest to a higher value than spec.domain.resources.requests.memory . This process is called memory overcommitment. In this example, 1024M is requested from the cluster, but the virtual machine instance is told that it has 2048M available. As long as there is enough free memory available on the node, the virtual machine instance will consume up to 2048M. kind: VirtualMachine spec: template: domain: resources: requests: memory: 1024M memory: guest: 2048M Note The same eviction rules as those for pods apply to the virtual machine instance if the node is under memory pressure. Create the virtual machine: USD oc create -f <file_name>.yaml 8.14.7.2. Disabling guest memory overhead accounting A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure that wraps each VirtualMachineInstance process. Though it is not usually advisable, it is possible to increase the virtual machine instance density on the node by disabling guest memory overhead accounting. Important Disabling guest memory overhead accounting increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed). The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors. Procedure To disable guest memory overhead accounting, edit the YAML configuration file and set the overcommitGuestOverhead value to true . This parameter is disabled by default. kind: VirtualMachine spec: template: domain: resources: overcommitGuestOverhead: true requests: memory: 1024M Note If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits, if present. Create the virtual machine: USD oc create -f <file_name>.yaml 8.14.8. Using huge pages with virtual machines You can use huge pages as backing memory for virtual machines in your cluster. 8.14.8.1. Prerequisites Nodes must have pre-allocated huge pages configured . 8.14.8.2. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. 
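As a side illustration of the page arithmetic above (not part of the original text): with 4Ki pages, 1Gi of memory corresponds to 262,144 pages (the 256,000 figure is a rounded value), whereas the same 1Gi is covered by only 512 pages of 2Mi, or by a single 1Gi huge page. A quick shell check, assuming a standard bash shell:

$ echo $(( 1024 * 1024 / 4 ))   # 4Ki pages needed to map 1Gi of memory: 262144
$ echo $(( 1024 / 2 ))          # 2Mi huge pages needed to map 1Gi of memory: 512

Fewer, larger pages mean far fewer entries are needed to map the same amount of memory, which is why larger page sizes help with the TLB behavior described next.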
The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages. 8.14.8.3. Configuring huge pages for virtual machines You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration. The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi . Note The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance. If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect. Prerequisites Nodes must have pre-allocated huge pages configured. Procedure In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain . The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi : kind: VirtualMachine ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 ... 1 The total amount of memory requested for the virtual machine. This value must be divisible by the page size. 2 The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi . The page size must be smaller than the requested memory. Apply the virtual machine configuration: USD oc apply -f <virtual_machine>.yaml 8.14.9. Enabling dedicated resources for virtual machines To improve performance, you can dedicate node resources, such as CPU, to a virtual machine. 8.14.9.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 8.14.9.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. The virtual machine must be powered off. 8.14.9.3. 
Enabling dedicated resources for a virtual machine You can enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created by using a Red Hat template or the wizard can be enabled with dedicated resources. Procedure Click Workloads Virtual Machines from the side menu. Select a virtual machine to open the Virtual Machine tab. Click the Details tab. Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window. Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 8.14.10. Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 8.14.10.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 8.14.10.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 8.14.10.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 8.14.10.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. 
The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM that inherits the CPU model of the node where it is scheduled. 8.14.11. Configuring PCI passthrough The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system. Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI). 8.14.11.1. About preparing a host device for PCI passthrough To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator. To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR. 8.14.11.1.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites Administrative privilege to a working OpenShift Container Platform cluster. Intel or AMD CPU hardware. Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 8.14.11.1.2. Binding PCI devices to the VFIO driver To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. Prerequisites You added kernel arguments to enable IOMMU for the CPU. Procedure Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device. USD lspci -nnv | grep -i nvidia Example output 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) Create a Butane config file, 100-worker-vfiopci.bu , binding the PCI device to the VFIO driver. 
Note See "Creating machine configs with Butane" for information about Butane. Example variant: openshift version: 4.9.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci 1 Applies the new kernel argument only to worker nodes. 2 Specify the previously determined vendor-ID value ( 10de ) and the device-ID value ( 1eb8 ) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. 3 The file that loads the vfio-pci kernel module on the worker nodes. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml , containing the configuration to be delivered to the worker nodes: USD butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml Apply the MachineConfig object to the worker nodes: USD oc apply -f 100-worker-vfiopci.yaml Verify that the MachineConfig object was added. USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s Verification Verify that the VFIO driver is loaded. USD lspci -nnk -d 10de: The output confirms that the VFIO driver is being used. Example output 8.14.11.1.3. Exposing PCI host devices in the cluster using the CLI To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 ... 1 The host devices that are permitted to be used in the cluster. 2 The list of PCI devices available on the node. 3 The vendor-ID and the device-ID required to identify the PCI device. 4 The name of a PCI host device. 5 Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin. Note The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. 
These devices have been tested and verified to work with OpenShift Virtualization. Save your changes and exit the editor. Verification Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100 , nvidia.com/TU104GL_Tesla_T4 , and intel.com/qat resource names. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 8.14.11.1.4. Removing PCI host devices from the cluster using the CLI To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector , resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted. Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" ... Save your changes and exit the editor. Verification Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 8.14.11.2. Configuring virtual machines for PCI passthrough After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines. 8.14.11.2.1. Assigning a PCI device to a virtual machine When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. Procedure Assign the PCI device to a virtual machine as a host device. 
Example apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1 1 The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device. Verification Use the following command to verify that the host device is available from the virtual machine. USD lspci -nnk | grep NVIDIA Example output USD 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) 8.14.11.3. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS Managing file permissions Post-installation machine configuration tasks 8.14.12. Configuring a watchdog Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service. 8.14.12.1. Prerequisites The virtual machine must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb . 8.14.12.2. Defining a watchdog device Define how the watchdog proceeds when the operating system (OS) no longer responds. Table 8.3. Available actions poweroff The virtual machine (VM) powers down immediately. If spec.running is set to true , or spec.runStrategy is not set to manual , then the VM reboots. reset The VM reboots in place and the guest OS cannot react. Because the length of time required for the guest OS to reboot can cause liveness probes to timeout, use of this option is discouraged. This timeout can extend the time it takes the VM to reboot if cluster-level protections notice the liveness probe failed and forcibly reschedule it. shutdown The VM gracefully powers down by stopping all services. Procedure Create a YAML file with the following contents: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 ... 1 Specify the watchdog action ( poweroff , reset , or shutdown ). The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog . This device can now be used by the watchdog binary. Apply the YAML file to your cluster by running the following command: USD oc apply -f <file_name>.yaml Important This procedure is provided for testing watchdog functionality only and must not be run on production machines. Run the following command to verify that the VM is connected to the watchdog device: USD lspci | grep watchdog -i Run one of the following commands to confirm the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Terminate the watchdog service: # pkill -9 watchdog 8.14.12.3. Installing a watchdog device Install the watchdog package on your virtual machine and start the watchdog service. Procedure As a root user, install the watchdog package and dependencies: # yum install watchdog Uncomment the following line in the /etc/watchdog.conf file, and save the changes: #watchdog-device = /dev/watchdog Enable the watchdog service to start on boot: # systemctl enable --now watchdog.service 8.14.12.4. Additional resources Monitoring application health by using health checks 8.15. Importing virtual machines 8.15.1. TLS certificates for data volume imports 8.15.1.1. 
Adding TLS certificates for authenticating data volume imports TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume. Create the config map by referencing the relative file path for the TLS certificate. Procedure Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace. USD oc get ns Create the config map: USD oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem> 8.15.1.2. Example: Config map created from a TLS certificate The following example is of a config map created from ca.pem TLS certificate. apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> ... -----END CERTIFICATE----- 8.15.2. Importing virtual machine images with data volumes Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage. The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry. Important When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details. 8.15.2.1. Prerequisites If the endpoint requires a TLS certificate, the certificate must be included in a config map in the same namespace as the data volume and referenced in the data volume configuration. To import a container disk: You might need to prepare a container disk from a virtual machine image and store it in your container registry before importing it. If the container registry does not have TLS, you must add the registry to the insecureRegistries field of the HyperConverged custom resource before you can import a container disk from it. You might need to define a storage class or prepare CDI scratch space for this operation to complete successfully. 8.15.2.2. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required Note CDI now uses the OpenShift Container Platform cluster-wide proxy configuration . 8.15.2.3. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.15.2.4. 
Importing a virtual machine image into storage by using a data volume You can import a virtual machine image into storage by using a data volume. The virtual machine image can be hosted at an HTTP or HTTPS endpoint or the image can be built into a container disk and stored in a container registry. You specify the data source for the image in a VirtualMachine configuration file. When the virtual machine is created, the data volume with the virtual machine image is imported into storage. Prerequisites To import a virtual machine image you must have the following: A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz . An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source. To import a container disk, you must have a virtual machine image built into a container disk and stored in a container registry, along with any authentication credentials needed to access the data source. If the virtual machine must communicate with servers that use self-signed certificates or certificates not signed by the system CA bundle, you must create a config map in the same namespace as the data volume. Procedure If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml : apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 2 secretKey: "" 3 1 Specify the name of the Secret . 2 Specify the Base64-encoded key ID or user name. 3 Specify the Base64-encoded secret key or password. Apply the Secret manifest: USD oc apply -f endpoint-secret.yaml Edit the VirtualMachine manifest, specifying the data source for the virtual machine image you want to import, and save it as vm-fedora-datavolume.yaml : 1 Specify the name of the virtual machine. 2 Specify the name of the data volume. 3 Specify http for an HTTP or HTTPS endpoint. Specify registry for a container disk image imported from a registry. 4 The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" . 5 Required if you created a Secret for the data source. 6 Optional: Specify a CA certificate config map. Create the virtual machine: USD oc create -f vm-fedora-datavolume.yaml Note The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the virtual machine. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the virtual machine has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 8.15.2.5. 
Additional resources Configure preallocation mode to improve write performance for data volume operations. 8.15.3. Importing virtual machine images into block storage with data volumes You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC). Important When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details. 8.15.3.1. Prerequisites If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 8.15.3.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.15.3.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 8.15.3.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2Gb (20 100Mb blocks): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10>d3 <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the step. 8.15.3.5. 
Importing a virtual machine image into block storage by using a data volume You can import a virtual machine image into block storage by using a data volume. You reference the data volume in a VirtualMachine manifest before you create a virtual machine. Prerequisites A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz . An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source. Procedure If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml : apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 2 secretKey: "" 3 1 Specify the name of the Secret . 2 Specify the Base64-encoded key ID or user name. 3 Specify the Base64-encoded secret key or password. Apply the Secret manifest: USD oc apply -f endpoint-secret.yaml Create a DataVolume manifest, specifying the data source for the virtual machine image and Block for storage.volumeMode . apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi 1 Specify the name of the data volume. 2 Optional: Set the storage class or omit it to accept the cluster default. 3 Specify the HTTP or HTTPS URL of the image to import. 4 Required if you created a Secret for the data source. 5 The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify Block . Create the data volume to import the virtual machine image: USD oc create -f import-pv-datavolume.yaml You can reference this data volume in a VirtualMachine manifest before you create a virtual machine. 8.15.3.6. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required Note CDI now uses the OpenShift Container Platform cluster-wide proxy configuration . 8.15.3.7. Additional resources Configure preallocation mode to improve write performance for data volume operations. 8.16. Cloning virtual machines 8.16.1. Enabling user permissions to clone data volumes across namespaces The isolating nature of namespaces means that users cannot by default clone resources between namespaces. To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace. 8.16.1.1. Prerequisites Only a user with the cluster-admin role can create cluster roles. 8.16.1.2. 
About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.16.1.3. Creating RBAC resources for cloning data volumes Create a new cluster role that enables permissions for all actions for the datavolumes resource. Procedure Create a ClusterRole manifest: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: ["cdi.kubevirt.io"] resources: ["datavolumes/source"] verbs: ["*"] 1 Unique name for the cluster role. Create the cluster role in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the ClusterRole manifest created in the step. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the step. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io 1 Unique name for the role binding. 2 The namespace for the source data volume. 3 The namespace to which the data volume is cloned. 4 The name of the cluster role created in the step. Create the role binding in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the RoleBinding manifest created in the step. 8.16.2. Cloning a virtual machine disk into a new data volume You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 8.16.2.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 8.16.2.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.16.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine. Note When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. 
If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). Procedure Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume. For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 8.16.2.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.16.3. Cloning a virtual machine by using a data volume template You can create a new virtual machine by cloning the persistent volume claim (PVC) of an existing VM. By including a dataVolumeTemplate in your virtual machine configuration file, you create a new data volume from the original PVC. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 8.16.3.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 8.16.3.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.16.3.3. 
Creating a new virtual machine from a cloned persistent volume claim by using a data volume template You can create a virtual machine that clones the persistent volume claim (PVC) of an existing virtual machine into a data volume. Reference a dataVolumeTemplate in the virtual machine manifest and the source PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine. Note When a data volume is created as part of the data volume template of a virtual machine, the lifecycle of the data volume is then dependent on the virtual machine. If the virtual machine is deleted, the data volume and associated PVC are also deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). Procedure Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a VirtualMachine object. The following virtual machine example clones my-favorite-vm-disk , which is located in the source-namespace namespace. The 2Gi data volume called favorite-clone is created from my-favorite-vm-disk . For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: "source-namespace" name: "my-favorite-vm-disk" 1 The virtual machine to create. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml 8.16.3.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.16.4. Cloning a virtual machine disk into a new block storage data volume You can clone the persistent volume claim (PVC) of a virtual machine disk into a new block data volume by referencing the source PVC in your data volume configuration file. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 8.16.4.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 8.16.4.2. 
About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.16.4.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 8.16.4.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2Gb (20 100Mb blocks): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10>d3 <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the step. 8.16.4.5. Cloning the persistent volume claim of a virtual machine disk into a new data volume You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine. Note When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). At least one available block persistent volume (PV) that is the same size as or larger than the source PVC. Procedure Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, volumeMode: Block so that an available block PV is used, and the size of the new data volume. 
For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. 5 Specifies that the destination is a block PV Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 8.16.4.6. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.17. Virtual machine networking 8.17.1. Configuring the virtual machine for the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode 8.17.1.1. Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP. Procedure Edit the interfaces spec of your virtual machine configuration file: kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 8.17.1.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. 
The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 ... interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 8.17.2. Creating a service to expose a virtual machine You can expose a virtual machine within the cluster or outside the cluster by using a Service object. 8.17.2.1. About services A Kubernetes service is an abstract way to expose an application running on a set of pods as a network service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a spec.type in the Service object: ClusterIP Exposes the service on an internal IP address within the cluster. ClusterIP is the default service type . NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a service accessible from outside the cluster. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. 8.17.2.1.1. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. 
For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 8.17.2.2. Exposing a virtual machine as a service Create a ClusterIP , NodePort , or LoadBalancer service to connect to a running virtual machine (VM) from within or outside the cluster. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add the label special: key in the spec.template.metadata.labels section. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7 1 The name of the Service object. 2 The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest. 3 Optional: Specifies how the nodes distribute service traffic that is received on external IP addresses. This only applies to NodePort and LoadBalancer service types. The default value is Cluster which routes traffic evenly to all cluster endpoints. 4 Optional: When set, the nodePort value must be unique across all services. If not specified, a value in the range above 30000 is dynamically allocated. 5 Optional: The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. If targetPort is not specified, it takes the same value as port . 6 The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest. 7 The type of service. Possible values are ClusterIP , NodePort and LoadBalancer . Save the Service manifest file. Create the service by running the following command: USD oc create -f <service_name>.yaml Start the VM. If the VM is already running, restart it. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace Example output for ClusterIP service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m Example output for NodePort service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m Example output for LoadBalancer service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s Choose the appropriate method to connect to the virtual machine: For a ClusterIP service, connect to the VM from within the cluster by using the service IP address and the service port. For example: USD ssh [email protected] -p 27017 For a NodePort service, connect to the VM by specifying the node IP address and the node port outside the cluster network. 
For example: USD ssh fedora@USDNODE_IP -p 30000 For a LoadBalancer service, use the vinagre client to connect to your virtual machine by using the public IP address and port. External ports are dynamically allocated. 8.17.2.3. Additional resources Configuring ingress cluster traffic using a NodePort Configuring ingress cluster traffic using a load balancer 8.17.3. Connecting a virtual machine to a Linux bridge network You can attach virtual machines to multiple networks by using Linux bridges. You can also import virtual machines with existing workloads that depend on access to multiple interfaces. To attach a virtual machine to an additional network: Configure a bridge network attachment definition for a namespace in the web console or CLI. Note The network attachment definition must be in the same namespace as the pod or virtual machine. Attach the virtual machine to the network attachment definition by using either the web console or the CLI: In the web console, create a NIC for a new or existing virtual machine. In the CLI, include the network information in the virtual machine configuration. 8.17.3.1. OpenShift Virtualization networking glossary OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Preboot eXecution Environment (PXE) an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client. 8.17.3.2. Creating a network attachment definition 8.17.3.2.1. Prerequisites A Linux bridge must be configured and attached on every node. See the node networking section for more information. Warning Configuring ipam in a network attachment definition for virtual machines is not supported. 8.17.3.2.2. Creating a Linux bridge network attachment definition in the web console Network administrators can create network attachment definitions to provide layer-2 networking to pods and virtual machines. Procedure In the web console, click Networking Network Attachment Definitions . Click Create Network Attachment Definition . Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Click the Network Type list and select CNV Linux bridge . Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . 
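If you want to confirm the result from the command line, you can inspect the object that the console created. This is a minimal sketch; <network-attachment-definition-name> and <namespace> are placeholders for the name and project that you entered in the form:
USD oc get network-attachment-definition <network-attachment-definition-name> -n <namespace> -o yaml
The spec.config field of the returned object should contain the cnv-bridge type and the bridge name that you provided.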
Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. 8.17.3.2.3. Creating a Linux bridge network attachment definition in the CLI As a network administrator, you can configure a network attachment definition of type cnv-bridge to provide layer-2 networking to pods and virtual machines. Prerequisites The node must support nftables and the nft binary must be deployed to enable MAC spoof check. Procedure Create a network attachment definition in the same namespace as the virtual machine. Add the virtual machine to the network attachment definition, as in the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ "cniVersion": "0.3.1", "name": "<bridge-network>", 3 "type": "cnv-bridge", 4 "bridge": "<bridge-interface>", 5 "macspoofchk": true, 6 "vlan": 1 7 }' 1 The name for the NetworkAttachmentDefinition object. 2 Optional: Annotation key-value pair for node selection, where bridge-interface is the name of a bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the bridge-interface bridge connected. 3 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 4 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI. 5 The name of the Linux bridge configured on the node. 6 Optional: Flag to enable MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. 7 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Create the network attachment definition: USD oc create -f <network-attachment-definition.yaml> 1 1 Where <network-attachment-definition.yaml> is the file name of the network attachment definition manifest. Verification Verify that the network attachment definition was created by running the following command: USD oc get network-attachment-definition <bridge-network> 8.17.3.3. Attaching the virtual machine to the additional network 8.17.3.3.1. Creating a NIC for a virtual machine in the web console Create and attach additional NICs to a virtual machine from the web console. Procedure In the correct project in the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click Network Interfaces to display the NICs already attached to the virtual machine. Click Add Network Interface to create a new slot in the list. Use the Network drop-down list to select the network attachment definition for the additional network. Fill in the Name , Model , Type , and MAC Address for the new NIC. Click Add to save and attach the NIC to the virtual machine. 8.17.3.3.2. Networking fields Name Description Name Name for the network interface controller. 
Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks. Select SR-IOV if you configured an SR-IOV network device and defined that network in the namespace. MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 8.17.3.3.3. Attaching a virtual machine to an additional network in the CLI Attach a virtual machine to an additional network by adding a bridge interface and specifying a network attachment definition in the virtual machine configuration. This procedure uses a YAML file to demonstrate editing the configuration and applying the updated file to the cluster. You can alternatively use the oc edit <object> <name> command to edit an existing virtual machine. Prerequisites Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect. Procedure Create or edit a configuration of a virtual machine that you want to connect to the bridge network. Add the bridge interface to the spec.template.spec.domain.devices.interfaces list and the network attachment definition to the spec.template.spec.networks list. This example adds a bridge interface called bridge-net that connects to the a-bridge-network network attachment definition: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 ... networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3 ... 1 The name of the bridge interface. 2 The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry. 3 The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the default namespace or the same namespace where the VM is to be created. In this case, multus is used. Multus is a Container Network Interface (CNI) plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Apply the configuration: USD oc apply -f <example-vm.yaml> Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 8.17.3.4. Additional resources Configuring IP addresses for virtual machines Configuring PXE booting for virtual machines Attaching a virtual machine to an SR-IOV network 8.17.4. Connecting a virtual machine to an SR-IOV network You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps: Configure an SR-IOV network device. Configure an SR-IOV network. Connect the VM to the SR-IOV network. 8.17.4.1. Prerequisites You must have installed the SR-IOV Network Operator . 8.17.4.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).
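Before you create the policy, you can optionally confirm that the SR-IOV Network Operator resources are present in the cluster. The following commands are a sketch only and assume the default openshift-sriov-network-operator installation namespace that is used elsewhere in this section:
USD oc get crd | grep sriovnetwork
USD oc get pods -n openshift-sriov-network-operator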
Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 . 6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3 . 10 Optional: Specify the device hex code of SR-IOV network device. The only allowed values are 158b , 1015 , 1017 . 11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. 
For a Mellanox card, set isRdma to false . The default value is false . Note If isRDMA flag is set to true , you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' 8.17.4.3. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: "<trust_vf>" 11 capabilities: <capabilities> 12 1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. 5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 7 Optional: Replace <link_state> with the link state of virtual function (VF). Allowed value are enable , disable and auto . 8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF. 9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to Maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 
10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0 . 11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 12 Optional: Replace <capabilities> with the capabilities to configure for this network. To create the object, enter the following command. Replace <name> with a name for this additional network. USD oc create -f <name>-sriov-network.yaml Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 8.17.4.4. Connecting a virtual machine to an SR-IOV network You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. Procedure Include the SR-IOV network details in the spec.domain.devices.interfaces and spec.networks of the VM configuration: kind: VirtualMachine ... spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6 ... 1 A unique name for the interface that is connected to the pod network. 2 The masquerade binding to the default pod network. 3 A unique name for the SR-IOV interface. 4 The name of the pod network interface. This must be the same as the interfaces.name that you defined earlier. 5 The name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier. 6 The name of the SR-IOV network attachment definition. Apply the virtual machine configuration: USD oc apply -f <vm-sriov.yaml> 1 1 The name of the virtual machine YAML file. 8.17.5. Configuring IP addresses for virtual machines You can configure either dynamically or statically provisioned IP addresses for virtual machines. Prerequisites The virtual machine must connect to an external network . You must have a DHCP server available on the additional network to configure a dynamic IP for the virtual machine. 8.17.5.1. Configuring an IP address for a new virtual machine using cloud-init You can use cloud-init to configure an IP address when you create a virtual machine. The IP address can be dynamically or statically provisioned. Procedure Create a virtual machine configuration and include the cloud-init network details in the spec.volumes.cloudInitNoCloud.networkData field of the virtual machine configuration: To configure a dynamic IP, specify the interface name and the dhcp4 boolean: kind: VirtualMachine spec: ... volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2 1 The interface name. 2 Uses DHCP to provision an IPv4 address. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: ... volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 The interface name. 2 The static IP address for the virtual machine. 8.17.6. Viewing the IP address of NICs on a virtual machine You can view the IP address for a network interface controller (NIC) by using the web console or the oc client. The QEMU guest agent displays additional information about the virtual machine's secondary networks. 8.17.6.1. 
Prerequisites Install the QEMU guest agent on the virtual machine. 8.17.6.2. Viewing the IP address of a virtual machine interface in the CLI The network interface configuration is included in the oc describe vmi <vmi_name> command. You can also view the IP address information by running ip addr on the virtual machine, or by running oc get vmi <vmi_name> -o yaml . Procedure Use the oc describe command to display the virtual machine interface configuration: USD oc describe vmi <vmi_name> Example output ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 8.17.6.3. Viewing the IP address of a virtual machine interface in the web console The IP information displays in the Virtual Machine Overview screen for the virtual machine. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine name to open the Virtual Machine Overview screen. The information for each attached NIC is displayed under IP Address . 8.17.7. Using a MAC address pool for virtual machines The KubeMacPool component provides a MAC address pool service for virtual machine NICs in a namespace. 8.17.7.1. About KubeMacPool KubeMacPool provides a MAC address pool per namespace and allocates MAC addresses for virtual machine NICs from the pool. This ensures that the NIC is assigned a unique MAC address that does not conflict with the MAC address of another virtual machine. Virtual machine instances created from that virtual machine retain the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. KubeMacPool is enabled by default when you install OpenShift Virtualization. You can disable a MAC address pool for a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Re-enable KubeMacPool for the namespace by removing the label. 8.17.7.2. Disabling a MAC address pool for a namespace in the CLI Disable a MAC address pool for virtual machines in a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Procedure Add the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. The following example disables KubeMacPool for two namespaces, <namespace1> and <namespace2> : USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore 8.17.7.3. Re-enabling a MAC address pool for a namespace in the CLI If you disabled KubeMacPool for a namespace and want to re-enable it, remove the mutatevirtualmachines.kubemacpool.io=ignore label from the namespace. Note Earlier versions of OpenShift Virtualization used the label mutatevirtualmachines.kubemacpool.io=allocate to enable KubeMacPool for a namespace. This is still supported but redundant as KubeMacPool is now enabled by default. Procedure Remove the KubeMacPool label from the namespace. The following example re-enables KubeMacPool for two namespaces, <namespace1> and <namespace2> : USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io- 8.18. 
Virtual machine disks 8.18.1. Storage features Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization. 8.18.1.1. OpenShift Virtualization storage feature matrix Table 8.4. OpenShift Virtualization storage feature matrix Virtual machine live migration Host-assisted virtual machine disk cloning Storage-assisted virtual machine disk cloning Virtual machine snapshots OpenShift Container Storage: RBD block-mode volumes Yes Yes Yes Yes OpenShift Virtualization hostpath provisioner No Yes No No Other multi-node writable storage Yes [1] Yes Yes [2] Yes [2] Other single-node writable storage No Yes Yes [2] Yes [2] PVCs must request a ReadWriteMany access mode. Storage provider must support both Kubernetes and CSI snapshot APIs Note You cannot live migrate virtual machines that use: A storage class with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs or SR-IOV network interfaces that have the sriovLiveMigration feature gate disabled Do not set the evictionStrategy field to LiveMigrate for these virtual machines. 8.18.2. Configuring local storage for virtual machines You can configure local storage for your virtual machines by using the hostpath provisioner feature. 8.18.2.1. About the hostpath provisioner The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. When you install the OpenShift Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must: Configure SELinux: If you use Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must create a MachineConfig object on each node. Otherwise, apply the SELinux label container_file_t to the persistent volume (PV) backing directory on each node. Create a HostPathProvisioner custom resource. Create a StorageClass object for the hostpath provisioner. The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the persistent volumes that the hostpath provisioner creates. 8.18.2.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS (RHCOS) 8 You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must create a MachineConfig object on each node. Prerequisites Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates. Important The backing directory must not be located in the filesystem's root directory because the / partition is read-only on RHCOS. For example, you can use /var/<directory_name> but not /<directory_name> . Warning If you select a directory that shares space with your operating system, you might exhaust the space on that partition and your node might become non-functional. Create a separate partition and point the hostpath provisioner to the separate partition to avoid interference with your operating system. Procedure Create the MachineConfig file. For example: USD touch machineconfig.yaml Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. 
For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-set-selinux-for-hostpath-provisioner labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux chcon for hostpath provisioner Before=kubelet.service [Service] ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> 1 [Install] WantedBy=multi-user.target enabled: true name: hostpath-provisioner.service 1 Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem's root directory ( / ). Create the MachineConfig object: USD oc create -f machineconfig.yaml -n <namespace> 8.18.2.3. Using the hostpath provisioner to enable local storage To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource. Prerequisites Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates. Important The backing directory must not be located in the filesystem's root directory because the / partition is read-only on Red Hat Enterprise Linux CoreOS (RHCOS). For example, you can use /var/<directory_name> but not /<directory_name> . Warning If you select a directory that shares space with your operating system, you might exhaust the space on that partition and your node becomes non-functional. Create a separate partition and point the hostpath provisioner to the separate partition to avoid interference with your operating system. Apply the SELinux context container_file_t to the PV backing directory on each node. For example: USD sudo chcon -t container_file_t -R <backing_directory_path> Note If you use Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must configure SELinux by using a MachineConfig manifest instead. Procedure Create the HostPathProvisioner custom resource file. For example: USD touch hostpathprovisioner_cr.yaml Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "<backing_directory_path>" 1 useNamingPrefix: false 2 workload: 3 1 Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem's root directory ( / ). 2 Change this value to true if you want to use the name of the persistent volume claim (PVC) that is bound to the created PV as the prefix of the directory name. 3 Optional: You can use the spec.workload field to configure node placement rules for the hostpath provisioner. Note If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors. Create the custom resource in the openshift-cnv namespace: USD oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv Additional resources Specifying nodes for virtualization components 8.18.2.4. Creating a storage class When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. 
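Because the parameters cannot be changed later, it is worth checking which storage classes already exist, and which one is marked as the cluster default, before you create a new one. This is a generic query and is shown only as a convenience:
USD oc get storageclass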
Important When using OpenShift Virtualization with OpenShift Container Platform Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block . Procedure Create a YAML file for defining the storage class. For example: USD touch storageclass.yaml Edit the file. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-provisioner 1 provisioner: kubevirt.io/hostpath-provisioner reclaimPolicy: Delete 2 volumeBindingMode: WaitForFirstConsumer 3 1 You can optionally rename the storage class by changing this value. 2 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the storage class defaults to Delete . 3 The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using StorageClass with volumeBindingMode set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a Pod is created using the PVC. Create the StorageClass object: USD oc create -f storageclass.yaml Additional resources Storage classes 8.18.3. Creating data volumes When you create a data volume, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) and populates the PVC with your data. You can create a data volume as either a standalone resource or by using a dataVolumeTemplate resource in a virtual machine specification. You create a data volume by using either the PVC API or storage APIs. Important When using OpenShift Virtualization with OpenShift Container Platform Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block . Tip Whenever possible, use the storage API to optimize space allocation and maximize performance. A storage profile is a custom resource that the CDI manages. It provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class. Storage profiles enable you to create data volumes quickly while reducing coding and minimizing potential errors. For recognized storage types, the CDI provides values that optimize the creation of PVCs. However, you can configure automatic settings for a storage class if you customize the storage profile. 8.18.3.1. 
Creating data volumes using the storage API When you create a data volume using the storage API, the Containerized Data Importer (CDI) optimizes your persistent volume claim (PVC) allocation based on the type of storage supported by your selected storage class. You only have to specify the data volume name, namespace, and the amount of storage that you want to allocate. For example: When using Ceph RBD, accessModes is automatically set to ReadWriteMany , which enables live migration. volumeMode is set to Block to maximize performance. When you are using volumeMode: Filesystem , more space will automatically be requested by the CDI, if required to accommodate file system overhead. In the following YAML, using the storage API requests a data volume with two gigabytes of usable space. The user does not need to know the volumeMode in order to correctly estimate the required persistent volume claim (PVC) size. The CDI chooses the optimal combination of accessModes and volumeMode attributes automatically. These optimal values are based on the type of storage or the defaults that you define in your storage profile. If you want to provide custom values, they override the system-calculated values. Example DataVolume definition apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7 1 The name of the new data volume. 2 Indicates that the source of the import is an existing persistent volume claim (PVC). 3 The namespace where the source PVC exists. 4 The name of the source PVC. 5 Indicates allocation using the storage API. 6 Specifies the amount of available space that you request for the PVC. 7 Optional: The name of the storage class. If the storage class is not specified, the system default storage class is used. 8.18.3.2. Creating data volumes using the PVC API When you create a data volume using the PVC API, the Containerized Data Importer (CDI) creates the data volume based on what you specify for the following fields: accessModes ( ReadWriteOnce , ReadWriteMany , or ReadOnlyMany ) volumeMode ( Filesystem or Block ) capacity of storage ( 5Gi , for example) In the following YAML, using the PVC API allocates a data volume with a storage capacity of two gigabytes. You specify an access mode of ReadWriteMany to enable live migration. Because you know the values your system can support, you specify Block storage instead of the default, Filesystem . Example DataVolume definition apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9 1 The name of the new data volume. 2 In the source section, pvc indicates that the source of the import is an existing persistent volume claim (PVC). 3 The namespace where the source PVC exists. 4 The name of the source PVC. 5 Indicates allocation using the PVC API. 6 accessModes is required when using the PVC API. 7 Specifies the amount of space you are requesting for your data volume. 8 Specifies that the destination is a block PVC. 9 Optionally, specify the storage class. If the storage class is not specified, the system default storage class is used.
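With either API, you would typically create the data volume from the saved manifest and then watch it until the operation completes. A minimal sketch, assuming the manifest is saved as <datavolume>.yaml:
USD oc create -f <datavolume>.yaml
USD oc get dv <datavolume> -w
The -w flag watches the data volume status; omit it to print the current status once.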
Important When you explicitly allocate a data volume by using the PVC API and you are not using volumeMode: Block , consider file system overhead. File system overhead is the amount of space required by the file system to maintain its metadata. The amount of space required for file system metadata is file system dependent. Failing to account for file system overhead in your storage capacity request can result in an underlying persistent volume claim (PVC) that is not large enough to accommodate your virtual machine disk. If you use the storage API, the CDI will factor in file system overhead and request a larger persistent volume claim (PVC) to ensure that your allocation request is successful. 8.18.3.3. Customizing the storage profile You can specify default parameters by editing the StorageProfile object for the provisioner's storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object. Prerequisites Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail. Note An empty status section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Interface (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by the CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations. Warning If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created. Procedure Edit the storage profile. In this example, the provisioner is not recognized by CDI: USD oc edit -n openshift-cnv storageprofile <storage_class> Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> Provide the needed attribute values in the storage profile: Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. After you save your changes, the selected values appear in the storage profile status element. 8.18.3.3.1. Setting a default cloning strategy using a storage profile You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy . Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values: snapshot - This method is used by default when snapshots are configured. This cloning strategy uses a temporary volume snapshot to clone the volume. The storage provisioner must support CSI snapshots. copy - This method uses a source pod and a target pod to copy data from the source volume to the target volume. 
Host-assisted cloning is the least efficient method of cloning. csi-clone - This method uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy , which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class. Note You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec section. Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. 3 The default cloning method of your choice. In this example, CSI volume cloning is specified. 8.18.3.4. Additional resources Creating a storage class Cloning a data volume using smart cloning 8.18.4. Configuring CDI to work with namespaces that have a compute resource quota You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions. 8.18.4.1. About CPU and memory quotas in a namespace A resource quota , defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0 . This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. 8.18.4.2. Overriding CPU and memory defaults Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: "500m" memory: "2Gi" requests: cpu: "250m" memory: "1Gi" Save and exit the editor to update the HyperConverged CR. 8.18.4.3. Additional resources Resource quotas per project 8.18.5. Managing data volume annotations Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 8.18.5.1. Example: Data volume annotations This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation. 
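For the second case, a data volume that keeps the cluster default network and also attaches the importer pod to the bridge-network network might look like the following sketch. Only the annotation key comes from the description above; the other values are placeholders modeled on the example that follows:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-secondary-net
  annotations:
    # Importer pod keeps the cluster default network and adds bridge-network as a secondary interface
    k8s.v1.cni.cncf.io/networks: bridge-network
spec:
  source:
    http:
      url: "example.exampleurl.com"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi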
Multus network annotation example apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: "example.exampleurl.com" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi 1 Multus network annotation 8.18.6. Using preallocation for data volumes The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. 8.18.6.1. About preallocation The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes. If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type: fallocate If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized. full If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed. 8.18.6.2. Enabling preallocation for a data volume You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI ( oc ). Preallocation mode is supported for all CDI source types. Procedure Specify the spec.preallocation field in the data volume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 ... pvc: ... preallocation: true 2 1 All CDI source types support preallocation, however preallocation is ignored for cloning operations. 2 The preallocation field is a boolean that defaults to false. 8.18.7. Uploading local disk images by using the web console You can upload a locally stored disk image file by using the web console. 8.18.7.1. Prerequisites You must have a virtual machine image file in IMG, ISO, or QCOW2 format. If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 8.18.7.2. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.18.7.3. Uploading an image file using the web console Use the web console to upload an image file to a new persistent volume claim (PVC). You can later use this PVC to attach the image to new virtual machines. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. 
For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . Procedure From the side menu of the web console, click Storage Persistent Volume Claims . Click the Create Persistent Volume Claim drop-down list to expand it. Click With Data Upload Form to open the Upload Data to Persistent Volume Claim page. Click Browse to open the file manager and select the image that you want to upload, or drag the file into the Drag a file here or browse to upload field. Optional: Set this image as the default image for a specific operating system. Select the Attach this data to a virtual machine operating system check box. Select an operating system from the list. The Persistent Volume Claim Name field is automatically filled with a unique name and cannot be edited. Take note of the name assigned to the PVC so that you can identify it later, if necessary. Select a storage class from the Storage Class list. In the Size field, enter the size value for the PVC. Select the corresponding unit of measurement from the drop-down list. Warning The PVC size must be larger than the size of the uncompressed virtual disk. Select an Access Mode that matches the storage class that you selected. Click Upload . 8.18.7.4. Additional resources Configure preallocation mode to improve write performance for data volume operations. 8.18.8. Uploading local disk images by using the virtctl tool You can upload a locally stored disk image to a new or existing data volume by using the virtctl command-line utility. 8.18.8.1. Prerequisites Enable the kubevirt-virtctl package. If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 8.18.8.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.18.8.3. Creating an upload data volume You can manually create a data volume with an upload data source to use for uploading local disk images. Procedure Create a data volume configuration that specifies spec: source: upload{} : apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2 1 The name of the data volume. 2 The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload. Create the data volume by running the following command: USD oc create -f <upload-datavolume>.yaml 8.18.8.4. Uploading a local disk image to a data volume You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure. 
Note After you upload a local disk image, you can add it to a virtual machine. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . The kubevirt-virtctl package must be installed on the client machine. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Identify the following items: The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically. The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image. The file location of the virtual machine disk image that you want to upload. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the step. For example: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the virtual machine disk image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 8.18.8.5. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.18.8.6. Additional resources Configure preallocation mode to improve write performance for data volume operations. 8.18.9. Uploading a local disk image to a block storage data volume You can upload a local disk image into a block data volume by using the virtctl command-line utility. In this workflow, you create a local block device to use as a persistent volume, associate this block volume with an upload data volume, and use virtctl to upload the local disk image into the data volume. 8.18.9.1. Prerequisites Enable the kubevirt-virtctl package. 
If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 8.18.9.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.18.9.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 8.18.9.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10> <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the previous step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the previous step. 8.18.9.5. Creating an upload data volume You can manually create a data volume with an upload data source to use for uploading local disk images. Procedure Create a data volume configuration that specifies spec: source: upload{} : apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2 1 The name of the data volume. 2 The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload. Create the data volume by running the following command: USD oc create -f <upload-datavolume>.yaml 8.18.9.6. Uploading a local disk image to a data volume You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
Note After you upload a local disk image, you can add it to a virtual machine. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . The kubevirt-virtctl package must be installed on the client machine. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Identify the following items: The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically. The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image. The file location of the virtual machine disk image that you want to upload. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the step. For example: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the virtual machine disk image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 8.18.9.7. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.18.9.8. Additional resources Configure preallocation mode to improve write performance for data volume operations. 8.18.10. Managing virtual machine snapshots You can create and delete virtual machine (VM) snapshots for VMs, whether the VMs are powered off (offline) or on (online). You can only restore to a powered off (offline) VM. OpenShift Virtualization supports VM snapshots on the following: Red Hat OpenShift Container Storage Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Online snapshots have a default time deadline of five minutes ( 5m ) that can be changed, if needed. 
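As a hedged sketch only, you change that deadline by adding the failureDeadline attribute (referred to as FailureDeadline in the CLI procedure later in this section) to the VirtualMachineSnapshot spec; the snapshot and VM names shown here are illustrative:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: my-vmsnapshot              # illustrative snapshot name
spec:
  failureDeadline: 3m              # abort the online snapshot if it does not complete within three minutes
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm                    # illustrative name of the source VM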
Important Online snapshots are not supported for virtual machines that have hot-plugged virtual disks. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 8.18.10.1. About virtual machine snapshots A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a development version. A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state). When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. With the VM snapshots feature, cluster administrators and application developers can: Create a new snapshot List all snapshots attached to a specific VM Restore a VM from a snapshot Delete an existing VM snapshot 8.18.10.1.1. Virtual machine snapshot controller and custom resource definitions (CRDs) The VM snapshot feature introduces three new API objects defined as CRDs for managing snapshots: VirtualMachineSnapshot : Represents a user request to create a snapshot. It contains information about the current state of the VM. VirtualMachineSnapshotContent : Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. VirtualMachineRestore : Represents a user request to restore a VM from a snapshot. The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping. 8.18.10.2. Installing QEMU guest agent on a Linux virtual machine The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. Install the agent and start the service. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Access the virtual machine command line through one of the consoles or by SSH. 
Install the QEMU guest agent on the virtual machine: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent 8.18.10.3. Installing QEMU guest agent on a Windows virtual machine For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existng or new Windows system. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 8.18.10.3.1. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 8.18.10.3.2. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 8.18.10.4. Creating a virtual machine snapshot in the web console You can create a virtual machine (VM) snapshot by using the web console. 
Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Note The VM snapshot only includes disks that meet the following requirements: Must be either a data volume or persistent volume claim Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. If the virtual machine is running, click Actions Stop Virtual Machine to power it down. Click the Snapshots tab and then click Take Snapshot . Fill in the Snapshot Name and optional Description fields. Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot. If your VM has disks that cannot be included in the snapshot and you still wish to proceed, select the I am aware of this warning and wish to proceed checkbox. Click Save . 8.18.10.5. Creating a virtual machine snapshot in the CLI You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. Kubevirt will coordinate with the QEMU guest agent to create a snapshot of the online VM. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Prerequisites Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots. Install the OpenShift CLI ( oc ). Optional: Power down the VM for which you want to create a snapshot. Procedure Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM. For example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 1 The name of the new VirtualMachineSnapshot object. 2 The name of the source VM. Create the VirtualMachineSnapshot resource. The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot and updates the status and readyToUse fields of the VirtualMachineSnapshot object. 
USD oc create -f <my-vmsnapshot>.yaml Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot: Enter the following command: USD oc wait my-vm my-vmsnapshot --for condition=Ready Verify the status of the snapshot: InProgress - The online snapshot operation is still in progress. Succeeded - The online snapshot operation completed successfully. Failed - The online snapshot operaton failed. Note Online snapshots have a default time deadline of five minutes ( 5m ). If the snapshot does not complete successfully in five minutes, the status is set to failed . Afterwards, the file system will be thawed and the VM unfrozen but the status remains failed until you delete the failed snapshot image. To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time designated in minutes ( m ) or in seconds ( s ) that you want to specify before the snapshot operation times out. To set no deadline, you can specify 0 , though this is generally not recommended, as it can result in an unresponsive VM. If you do not specify a unit of time such as m or s , the default is seconds ( s ). Verification Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent . The readyToUse flag must be set to true . USD oc describe vmsnapshot <my-vmsnapshot> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 1 The status field of the Progressing condition specifies if the snapshot is still being created. 2 The status field of the Ready condition specifies if the snapshot creation process is complete. 3 Specifies if the snapshot is ready to be used. 4 Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot. 8.18.10.6. Verifying online snapshot creation with snapshot indications Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. Prerequisites To view indications, you must have attempted to create an online VM snapshot using the CLI or the web console. Procedure Display the output from the snapshot indications by doing one of the following: For snapshots created with the CLI, view indicator output in the VirtualMachineSnapshot object YAML, in the status field. 
For snapshots created using the web console, click VirtualMachineSnapshot > Status in the Snapshot details screen. Verify the status of your online VM snapshot: Online indicates that the VM was running during online snapshot creation. NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error. 8.18.10.7. Restoring a virtual machine from a snapshot in the web console You can restore a virtual machine (VM) to a configuration represented by a snapshot in the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. If the virtual machine is running, click Actions Stop Virtual Machine to power it down. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine. Choose one of the following methods to restore a VM snapshot: For the snapshot that you want to use as the source to restore the VM, click Restore . Select a snapshot to open the Snapshot Details screen and click Actions Restore Virtual Machine Snapshot . In the confirmation pop-up window, click Restore to restore the VM to its configuration represented by the snapshot. 8.18.10.8. Restoring a virtual machine from a snapshot in the CLI You can restore an existing virtual machine (VM) to a configuration by using a VM snapshot. You can only restore from an offline VM snapshot. Prerequisites Install the OpenShift CLI ( oc ). Power down the VM you want to restore to a state. Procedure Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source. For example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3 1 The name of the new VirtualMachineRestore object. 2 The name of the target VM you want to restore. 3 The name of the VirtualMachineSnapshot object to be used as the source. Create the VirtualMachineRestore resource. The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. USD oc create -f <my-vmrestore>.yaml Verification Verify that the VM is restored to the state represented by the snapshot. The complete flag must be set to true . 
USD oc get vmrestore <my-vmrestore> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1 1 Specifies if the process of restoring the VM to the state represented by the snapshot is complete. 2 The status field of the Progressing condition specifies if the VM is still being restored. 3 The status field of the Ready condition specifies if the VM restoration process is complete. 8.18.10.9. Deleting a virtual machine snapshot in the web console You can delete an existing virtual machine snapshot by using the web console. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine. Choose one of the following methods to delete a virtual machine snapshot: Click the Options menu of the virtual machine snapshot that you want to delete and select Delete Virtual Machine Snapshot . Select a snapshot to open the Snapshot Details screen and click Actions Delete Virtual Machine Snapshot . In the confirmation pop-up window, click Delete to delete the snapshot. 8.18.10.10. Deleting a virtual machine snapshot in the CLI You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Delete the VirtualMachineSnapshot object. The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object. USD oc delete vmsnapshot <my-vmsnapshot> Verification Verify that the snapshot is deleted and no longer attached to this VM: USD oc get vmsnapshot 8.18.10.11. Additional resources CSI Volume Snapshots 8.18.11. Moving a local virtual machine disk to a different node Virtual machines that use local volume storage can be moved so that they run on a specific node. You might want to move the virtual machine to a specific node for the following reasons: The current node has limitations to the local storage configuration. The new node is better optimized for the workload of that virtual machine. To move a virtual machine that uses local storage, you must clone the underlying volume by using a data volume. 
After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new data volume, or add the new data volume to another virtual machine . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . Note Users without the cluster-admin role require additional user permissions to clone volumes across namespaces. 8.18.11.1. Cloning a local volume to another node You can move a virtual machine disk so that it runs on a specific node by cloning the underlying persistent volume claim (PVC). To ensure the virtual machine disk is cloned to the correct node, you must either create a new persistent volume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the data volume. Note The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails. Prerequisites The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk. Procedure Either create a new local PV on the node, or identify a local PV already on the node: Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01 . kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem 1 The name of the PV. 2 The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. 3 The mount path on the node. 4 The name of the node where you want to create the PV. Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration: USD oc get pv <destination-pv> -o yaml The following snippet shows that the PV is on node01 : Example output ... spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2 ... 1 The kubernetes.io/hostname key uses the node hostname to select a node. 2 The hostname of the node. Add a unique label to the PV: USD oc label pv <destination-pv> node=node01 Create a data volume manifest that references the following: The PVC name and namespace of the virtual machine. The label you applied to the PV in the step. The size of the destination PV. apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: "<source-vm-disk>" 2 namespace: "<source-namespace>" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5 1 The name of the new data volume. 2 The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName . 3 The namespace where the source PVC exists. 4 The label that you applied to the PV in the step. 5 The size of the destination PV. 
Start the cloning operation by applying the data volume manifest to your cluster: USD oc apply -f <clone-datavolume.yaml> The data volume clones the PVC of the virtual machine into the PV on the specific node. 8.18.12. Expanding virtual storage by adding blank disk images You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization. 8.18.12.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.18.12.2. Creating a blank disk image with data volumes You can create a new blank disk image in a persistent volume claim by customizing and deploying a data volume configuration file. Prerequisites At least one available persistent volume. Install the OpenShift CLI ( oc ). Procedure Edit the DataVolume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: "hostpath" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi Create the blank disk image by running the following command: USD oc create -f <blank-image-datavolume>.yaml 8.18.12.3. Additional resources Configure preallocation mode to improve write performance for data volume operations. 8.18.13. Cloning a data volume using smart-cloning Smart-cloning is a built-in feature of Red Hat OpenShift Container Storage (OCS), designed to enhance performance of the cloning process. Clones created with smart-cloning are faster and more efficient than host-assisted cloning. You do not need to perform any action to enable smart-cloning, but you need to ensure that your storage environment is compatible with smart-cloning to use this feature. When you create a data volume with a persistent volume claim (PVC) source, you automatically initiate the cloning process. You always receive a clone of the data volume, whether or not your environment supports smart-cloning. However, you only receive the performance benefits of smart-cloning if your storage provider supports it. 8.18.13.1. About smart-cloning When a data volume is smart-cloned, the following occurs: A snapshot of the source persistent volume claim (PVC) is created. A PVC is created from the snapshot. The snapshot is deleted. 8.18.13.2. Cloning a data volume Prerequisites For smart-cloning to occur, the following conditions are required: Your storage provider must support snapshots. The source and target PVCs must be defined to the same storage class. The source and target PVCs must share the same volumeMode . The VolumeSnapshotClass object must reference the storage class defined for both the source and target PVCs. Procedure To initiate cloning of a data volume: Create a YAML file for a DataVolume object that specifies the name of the new data volume and the name and namespace of the source PVC. In this example, because you specify the storage API, there is no need to specify accessModes or volumeMode . The optimal values are calculated for you automatically.
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 storage: 4 resources: requests: storage: <2Gi> 5 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 Specifies allocation with the storage API 5 The size of the new data volume. Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 8.18.13.3. Additional resources Cloning the persistent volume claim of a virtual machine disk into a new data volume Configure preallocation mode to improve write performance for data volume operations. Customizing the storage profile 8.18.14. Creating and using boot sources A boot source contains a bootable operating system (OS) and all of the configuration settings for the OS, such as drivers. You use a boot source to create virtual machine templates with specific configurations. These templates can be used to create any number of available virtual machines. Quick Start tours are available in the OpenShift Container Platform web console to assist you in creating a custom boot source, uploading a boot source, and other tasks. Select Quick Starts from the Help menu to view the Quick Start tours. 8.18.14.1. About virtual machines and boot sources Virtual machines consist of a virtual machine definition and one or more disks that are backed by data volumes. Virtual machine templates enable you to create virtual machines using predefined virtual machine specifications. Every virtual machine template requires a boot source, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. To use the boot sources feature, install the latest release of OpenShift Virtualization. The namespace openshift-virtualization-os-images enables the feature and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. Define a boot source by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you create any number of fully configured ready-to-use virtual machines from the template. 8.18.14.2. 
Importing a Red Hat Enterprise Linux image as a boot source You can import a Red Hat Enterprise Linux (RHEL) image as a boot source by specifying the URL address for the image. Prerequisites You must have access to the web server with the operating system image. For example: Red Hat Enterprise Linux web page with images. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Identify the RHEL template for which you want to configure a boot source and click Add source . In the Add boot source to template window, select Import via URL (creates PVC) from the Boot source type list. Click RHEL download page to access the Red Hat Customer Portal. A list of available installers and images is displayed on the Download Red Hat Enterprise Linux page. Identify the Red Hat Enterprise Linux KVM guest image that you want to download. Right-click Download Now , and copy the URL for the image. In the Add boot source to template window, paste the copied URL of the guest image into the Import URL field, and click Save and import . Verification To verify that a boot source was added to the template: Click the Templates tab. Confirm that the tile for this template displays a green checkmark. You can now use this template to create RHEL virtual machines. 8.18.14.3. Adding a boot source for a virtual machine template A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Available in the Templates tab. After you add a boot source to a template, you can create a new virtual machine from the template. There are four methods for selecting and adding a boot source in the web console: Upload local file (creates PVC) Import via URL (creates PVC) Clone existing PVC (creates PVC) Import via Registry (creates PVC) Prerequisites To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added. To upload a local file, the operating system image file must exist on your local machine. To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images. To clone an existing PVC, access to the project with a PVC is required. To import via registry, access to the container registry is required. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Identify the virtual machine template for which you want to configure a boot source and click Add source . In the Add boot source to template window, click Select boot source , select a method for creating a persistent volume claim (PVC): Upload local file , Import via URL , Clone existing PVC , or Import via Registry . Optional: Click This is a CD-ROM boot source to mount a CD-ROM and use it to install the operating system on to an empty disk. The additional empty disk is automatically created and mounted by OpenShift Virtualization. If the additional disk is not needed, you can remove it when you create the virtual machine. Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required. 
Optional: Enter a name for Source provider to associate the name with this template. Optional: Advanced Storage settings : Click Storage class and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs. Note Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. Optional: Advanced Storage settings : Click Access mode and select an access mode for the persistent volume: Single User (RWO) mounts the volume as read-write by a single node. Shared Access (RWX) mounts the volume as read-write by many nodes. Read Only (ROX) mounts the volume as read-only by many nodes. Optional: Advanced Storage settings : Click Volume mode if you want to select Block instead of the default value Filesystem . OpenShift Virtualization can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. Select the appropriate method to save your boot source: Click Save and upload if you uploaded a local file. Click Save and import if you imported content from a URL or the registry. Click Save and clone if you cloned an existing PVC. Your custom virtual machine template with a boot source is listed in the Templates tab, and you can create virtual machines by using this template. 8.18.14.4. Creating a virtual machine from a template with an attached boot source After you add a boot source to a template, you can create a new virtual machine from the template. Procedure In the OpenShift Container Platform web console, click Workloads > Virtualization in the side menu. From the Virtual Machines tab or the Templates tab, click Create and select Virtual Machine with Wizard . In the Select a template step, select an OS from the Operating System list that has the (Source available) label to the OS and version name. The (Source available) label indicates that a boot source is available for this OS. Click Review and Confirm . Review your virtual machine settings and edit them, if required. Click Create Virtual Machine to create your virtual machine. The Successfully created virtual machine page is displayed. 8.18.14.5. Creating a custom boot source You can prepare a custom disk image, based on an existing disk image, for use as a boot source. Use this procedure to complete the following tasks: Preparing a custom disk image Creating a boot source from the custom disk image Attaching the boot source to a custom template Procedure In the OpenShift Virtualization console, click Workloads > Virtualization from the side menu. Click the Templates tab. Click the link in the Source provider column for the template you want to customize. A window displays, indicating that the template currently has a defined source. In the window, click the Customize source link. Click Continue in the About boot source customization window to proceed with customization after reading the information provided about the boot source customization process. 
On the Prepare boot source customization page, in the Define new template section: Select the New template namespace field and then choose a project. Enter the name of the custom template in the New template name field. Enter the name of the template provider in the New template provider field. Select the New template support field and then choose the appropriate value, indicating support contacts for the custom template you create. Select the New template flavor field and then choose the appropriate CPU and memory values for the custom image you create. In the Prepare boot source for customization section, customize the cloud-init YAML script, if needed, to define login credentials. Otherwise, the script generates default credentials for you. Click Start Customization . The customization process begins and the Preparing boot source customization page displays, followed by the Customize boot source page. The Customize boot source page displays the output of the running script. When the script completes, your custom image is available. In the VNC console , click show password in the Guest login credentials section. Your login credentials display. When the image is ready for login, sign in with the VNC Console by providing the user name and password displayed in the Guest login credentials section. Verify the custom image works as expected. If it does, click Make this boot source available . In the Finish customization and make template available window, select I have sealed the boot source so it can be used as a template and then click Apply . On the Finishing boot source customization page, wait for the template creation process to complete. Click Navigate to template details or Navigate to template list to view your customized template, created from your custom boot source. 8.18.14.6. Additional resources Creating virtual machine templates Creating a Microsoft Windows boot source from a cloud image Customizing existing Microsoft Windows boot sources in OpenShift Container Platform Setting a PVC as a boot source for a Microsoft Windows template using the CLI Creating boot sources using automated scripting Creating a boot source automatically within a pod 8.18.15. Hot-plugging virtual disks Important Hot-plugging virtual disks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Hot-plug and hot-unplug virtual disks when you want to add or remove them without stopping your virtual machine or virtual machine instance. This capability is helpful when you need to add storage to a running virtual machine without incurring down-time. When you hot-plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running. When you hot-unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot-plugged and hot-unplugged. You cannot hot-plug or hot-unplug container disks. 8.18.15.1. 
Hot-plugging a virtual disk using the CLI Hot-plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running. Prerequisites You must have a running virtual machine to hot-plug a virtual disk. You must have at least one data volume or persistent volume claim (PVC) available for hot-plugging. Procedure Hot-plug a virtual disk by running the following command: USD virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>] Use the optional --persist flag to add the hot-plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot-plug or hot-unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances. The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot-plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot-plugged data volume or PVC. 8.18.15.2. Hot-unplugging a virtual disk using the CLI Hot-unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running. Prerequisites Your virtual machine must be running. You must have at least one data volume or persistent volume claim (PVC) available and hot-plugged. Procedure Hot-unplug a virtual disk by running the following command: USD virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> 8.18.15.3. Hot-plugging a virtual disk using the web console Hot-plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running. Prerequisites You must have a running virtual machine to hot-plug a virtual disk. Procedure Click Workloads Virtualization from the side menu. On the Virtual Machines tab, select the running virtual machine to which you want to hot-plug a virtual disk. On the Disks tab, click Add Disk . In the Add Disk window, fill in the information for the virtual disk that you want to hot-plug. Click Add . 8.18.15.4. Hot-unplugging a virtual disk using the web console Hot-unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running. Prerequisites Your virtual machine must be running with a hot-plugged disk attached. Procedure Click Workloads Virtualization from the side menu. On the Virtual Machines tab, select the running virtual machine with the disk you want to hot-unplug. On the Disks tab, click the Options menu of the virtual disk that you want to hot-unplug. Click Delete . 8.18.16. Using container disks with virtual machines You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage. Important If you use large container disks, I/O traffic might increase, impacting worker nodes. This can lead to unavailable nodes. You can resolve this by: Pruning DeploymentConfig objects Configuring garbage collection 8.18.16.1. About container disks A container disk is a virtual machine image that is stored as a container image in a container image registry.
You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones. A container disk can either be imported into a persistent volume claim (PVC) by using a data volume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume. 8.18.16.1.1. Importing a container disk into a PVC by using a data volume Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a data volume. You can then attach the data volume to a virtual machine for persistent storage. 8.18.16.1.2. Attaching a container disk to a virtual machine as a containerDisk volume A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine. Use containerDisk volumes for read-only file systems such as CD-ROMs or for disposable virtual machines. Important Using containerDisk volumes for read-write file systems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly. 8.18.16.2. Preparing a container disk for virtual machines You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC using a data volume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume. The size of a disk image inside a container disk is limited by the maximum layer size of the registry where the container disk is hosted. Note For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed. Prerequisites Install podman if it is not already installed. The virtual machine image must be either QCOW2 or RAW format. Procedure Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440. The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result: USD cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF 1 Where <vm_image> is the virtual machine image in either QCOW2 or RAW format. To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image. Build and tag the container: USD podman build -t <registry>/<container_disk_name>:latest . Push the container image to the registry: USD podman push <registry>/<container_disk_name>:latest If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage. 
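For illustration only, the following sketch fills the same preparation flow in with hypothetical values: the disk file rhel-guest.qcow2, the registry quay.io/example, and the image name my-vm-disk are placeholders rather than names defined by this documentation, so substitute your own values. The closing commented fragment shows one way the pushed image might then be referenced from a virtual machine as an ephemeral containerDisk volume, as described in the previous section.
# Hypothetical example: build a container disk from a local QCOW2 file and push it to a registry.
cat > Dockerfile << EOF
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 rhel-guest.qcow2 /disk/
RUN chmod 0440 /disk/*
FROM scratch
COPY --from=builder /disk/* /disk/
EOF
podman build -t quay.io/example/my-vm-disk:latest .
podman push quay.io/example/my-vm-disk:latest
# Hypothetical fragment of a VirtualMachine spec that mounts the pushed image as an ephemeral volume:
# volumes:
#   - name: rootdisk
#     containerDisk:
#       image: quay.io/example/my-vm-disk:latest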
8.18.16.3. Disabling TLS for a container registry to use as insecure registry You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource. Prerequisites Log in to the cluster as a user with the cluster-admin role. Procedure Edit the HyperConverged custom resource and add a list of insecure registries to the spec.storageImport.insecureRegistries field. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - "private-registry-example-1:5000" - "private-registry-example-2:5000" 1 Replace the examples in this list with valid registry hostnames. 8.18.16.4. steps Import the container disk into persistent storage for a virtual machine . Create a virtual machine that uses a containerDisk volume for ephemeral storage. 8.18.17. Preparing CDI scratch space 8.18.17.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.18.17.2. About scratch space The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts. You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource. If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used. Note CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs. Manual provisioning If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod. 8.18.17.3. CDI operations that require scratch space Type Reason Registry imports CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. Upload image QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. HTTP imports of archived images QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. HTTP imports of authenticated images QEMU-IMG inadequately handles authentication. 
Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. HTTP imports of custom certificates QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. 8.18.17.4. Defining a storage class You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: "<storage_class>" 1 1 If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated. Save and exit your default editor to update the HyperConverged CR. 8.18.17.5. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 8.18.17.6. Additional resources Dynamic provisioning 8.18.18. Re-using persistent volumes To re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used. 8.18.18.1. About reclaiming statically provisioned persistent volumes When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage. You can then re-use the PV configuration to create a PV with a different name. Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. 8.18.18.2. Reclaiming statically provisioned persistent volumes Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage. Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage. 
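Before starting the procedure that follows, it can help to survey the reclaim policy of every PV in the cluster at once. The command below is an illustrative sketch that uses the standard custom-columns output of oc; the column paths reference the usual PersistentVolume API fields.
# Illustrative pre-check: list each persistent volume with its reclaim policy and phase.
oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase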
Procedure Ensure that the reclaim policy of the PV is set to Retain : Check the reclaim policy of the PV: USD oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy' If the persistentVolumeReclaimPolicy is not set to Retain , edit the reclaim policy with the following command: USD oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Ensure that no resources are using the PV: USD oc describe pvc <pvc_name> | grep 'Mounted By:' Remove any resources that use the PVC before continuing. Delete the PVC to release the PV: USD oc delete pvc <pvc_name> Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV: USD oc get pv <pv_name> -o yaml > <file_name>.yaml Delete the PV: USD oc delete pv <pv_name> Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder: USD rm -rf <path_to_share_storage> Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest: Note To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted. USD oc create -f <new_pv_name>.yaml Additional resources Configuring local storage for virtual machines The OpenShift Container Platform Storage documentation has more information on Persistent Storage . 8.18.19. Deleting data volumes You can manually delete a data volume by using the oc command-line interface. Note When you delete a virtual machine, the data volume it uses is automatically deleted. 8.18.19.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 8.18.19.2. Listing all data volumes You can list the data volumes in your cluster by using the oc command-line interface. Procedure List all data volumes by running the following command: USD oc get dvs 8.18.19.3. Deleting a data volume You can delete a data volume by using the oc command-line interface (CLI). Prerequisites Identify the name of the data volume that you want to delete. Procedure Delete the data volume by running the following command: USD oc delete dv <datavolume_name> Note This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace. | [
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 2 chpasswd: { expire: False } name: cloudinitdisk",
"oc create -f <vm_manifest_file>.yaml",
"virtctl start <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template:",
"oc edit <object_type> <object_ID>",
"oc apply <object_type> <object_ID>",
"oc edit vm example",
"disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default",
"oc delete vm <vm_name>",
"oc get vmis",
"oc delete vmi <vmi_name>",
"remmina --connect /path/to/console.rdp",
"virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE fedora-vm-ssh NodePort 127.0.0.1 <none> 22:32551/TCP 6s",
"ssh username@<node_IP_address> -p 32551",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: namespace: ssh-ns 1 name: vm-ssh spec: running: false template: metadata: labels: kubevirt.io/vm: vm-ssh special: vm-ssh 2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} 3 name: testmasquerade 4 rng: {} machine: type: \"\" resources: requests: memory: 1024M networks: - name: testmasquerade pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #cloud-config user: fedora password: fedora chpasswd: {expire: False}",
"oc create -f <path_for_the_VM_YAML_file>",
"virtctl start vm-ssh",
"apiVersion: v1 kind: Service metadata: name: svc-ssh 1 namespace: ssh-ns 2 spec: ports: - targetPort: 22 3 protocol: TCP port: 27017 selector: special: vm-ssh 4 type: NodePort",
"oc create -f <path_for_the_service_YAML_file>",
"oc get vmi",
"NAME AGE PHASE IP NODENAME vm-ssh 6s Running 10.244.196.152 node01",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc-ssh NodePort 10.106.236.208 <none> 27017:30093/TCP 22s",
"oc get node <node_name> -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.22.1 192.168.55.101 <none>",
"ssh [email protected] -p 30093",
"virtctl console <VMI>",
"virtctl vnc <VMI>",
"virtctl vnc <VMI> -v 4",
"oc login -u <user> https://<cluster.example.com>:8443",
"oc describe vmi <windows-vmi-name>",
"spec: networks: - name: default pod: {} - multus: networkName: cnv-bridge name: bridge-net status: interfaces: - interfaceName: eth0 ipAddress: 198.51.100.0/24 ipAddresses: 198.51.100.0/24 mac: a0:36:9f:0f:b1:70 name: default - interfaceName: eth1 ipAddress: 192.0.2.0/24 ipAddresses: 192.0.2.0/24 2001:db8::/32 mac: 00:17:a4:77:77:25 name: bridge-net",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1",
"metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2",
"metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname",
"metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value",
"metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3",
"certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s",
"error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration",
"kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus:",
"kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio",
"kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1",
"ansible-playbook create-vm.yaml",
"(...) TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"ansible-playbook create-vm.yaml",
"--- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio",
"apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 #",
"oc create -f <file_name>.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", \"plugins\": [ { \"type\": \"cnv-bridge\", \"bridge\": \"br1\", \"vlan\": 1 1 }, { \"type\": \"cnv-tuning\" 2 } ] }'",
"oc create -f pxe-net-conf.yaml",
"interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1",
"devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2",
"networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf",
"oc create -f vmi-pxe-boot.yaml",
"virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created",
"oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running",
"virtctl vnc vmi-pxe-boot",
"virtctl console vmi-pxe-boot",
"ip addr",
"3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff",
"kind: VirtualMachine spec: template: domain: resources: requests: memory: 1024M memory: guest: 2048M",
"oc create -f <file_name>.yaml",
"kind: VirtualMachine spec: template: domain: resources: overcommitGuestOverhead: true requests: memory: 1024M",
"oc create -f <file_name>.yaml",
"kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2",
"oc apply -f <virtual_machine>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1",
"apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3",
"oc create -f 100-worker-kernel-arg-iommu.yaml",
"oc get MachineConfig",
"lspci -nnv | grep -i nvidia",
"02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)",
"variant: openshift version: 4.9.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci",
"butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml",
"oc apply -f 100-worker-vfiopci.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s",
"lspci -nnk -d 10de:",
"04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5",
"oc describe node <node_name>",
"Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"",
"oc describe node <node_name>",
"Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1",
"lspci -nnk | grep NVIDIA",
"02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1",
"oc apply -f <file_name>.yaml",
"lspci | grep watchdog -i",
"echo c > /proc/sysrq-trigger",
"pkill -9 watchdog",
"yum install watchdog",
"#watchdog-device = /dev/watchdog",
"systemctl enable --now watchdog.service",
"oc get ns",
"oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>",
"apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> -----END CERTIFICATE-----",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 4 secretRef: endpoint-secret 5 certConfigMap: \"\" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}",
"oc create -f vm-fedora-datavolume.yaml",
"oc get pods",
"oc describe dv fedora-dv 1",
"virtctl console vm-fedora-datavolume",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi",
"oc create -f import-pv-datavolume.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4",
"oc create -f <cloner-datavolume>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: \"source-namespace\" name: \"my-favorite-vm-disk\"",
"oc create -f <vm-clone-datavolumetemplate>.yaml",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5",
"oc create -f <cloner-datavolume>.yaml",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}",
"oc create -f <vm-name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4",
"oc create -f example-vm-ipv6.yaml",
"oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7",
"oc create -f <service_name>.yaml",
"oc get service -n example-namespace",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s",
"ssh [email protected] -p 27017",
"ssh fedora@USDNODE_IP -p 30000",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"<bridge-network>\", 3 \"type\": \"cnv-bridge\", 4 \"bridge\": \"<bridge-interface>\", 5 \"macspoofchk\": true, 6 \"vlan\": 1 7 }'",
"oc create -f <network-attachment-definition.yaml> 1",
"oc get network-attachment-definition <bridge-network>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3",
"oc apply -f <example-vm.yaml>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12",
"oc create -f <name>-sriov-network.yaml",
"oc get net-attach-def -n <namespace>",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6",
"oc apply -f <vm-sriov.yaml> 1",
"kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2",
"kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2",
"oc describe vmi <vmi_name>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-",
"touch machineconfig.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-set-selinux-for-hostpath-provisioner labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux chcon for hostpath provisioner Before=kubelet.service [Service] ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> 1 [Install] WantedBy=multi-user.target enabled: true name: hostpath-provisioner.service",
"oc create -f machineconfig.yaml -n <namespace>",
"sudo chcon -t container_file_t -R <backing_directory_path>",
"touch hostpathprovisioner_cr.yaml",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"<backing_directory_path>\" 1 useNamingPrefix: false 2 workload: 3",
"oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv",
"touch storageclass.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-provisioner 1 provisioner: kubevirt.io/hostpath-provisioner reclaimPolicy: Delete 2 volumeBindingMode: WaitForFirstConsumer 3",
"oc create -f storageclass.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9",
"oc edit -n openshift-cnv storageprofile <storage_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: \"example.exampleurl.com\" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 pvc: preallocation: true 2",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2",
"oc create -f <my-vmsnapshot>.yaml",
"oc wait my-vm my-vmsnapshot --for condition=Ready",
"oc describe vmsnapshot <my-vmsnapshot>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3",
"oc create -f <my-vmrestore>.yaml",
"oc get vmrestore <my-vmrestore>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1",
"oc delete vmsnapshot <my-vmsnapshot>",
"oc get vmsnapshot",
"kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem",
"oc get pv <destination-pv> -o yaml",
"spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2",
"oc label pv <destination-pv> node=node01",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: \"<source-vm-disk>\" 2 namespace: \"<source-namespace>\" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5",
"oc apply -f <clone-datavolume.yaml>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"oc create -f <blank-image-datavolume>.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 storage: 4 resources: requests: storage: <2Gi> 5",
"oc create -f <cloner-datavolume>.yaml",
"virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]",
"virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>",
"cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF",
"podman build -t <registry>/<container_disk_name>:latest .",
"podman push <registry>/<container_disk_name>:latest",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1",
"oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'",
"oc patch pv <pv_name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc describe pvc <pvc_name> | grep 'Mounted By:'",
"oc delete pvc <pvc_name>",
"oc get pv <pv_name> -o yaml > <file_name>.yaml",
"oc delete pv <pv_name>",
"rm -rf <path_to_share_storage>",
"oc create -f <new_pv_name>.yaml",
"oc get dvs",
"oc delete dv <datavolume_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/virtual-machines |
Developing Ansible automation content | Developing Ansible automation content Red Hat Ansible Automation Platform 2.4 Develop Ansible automation content to run automation jobs Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/index |
Configuring HA clusters to manage SAP NetWeaver or SAP S/4HANA Application server instances using the RHEL HA Add-On | Configuring HA clusters to manage SAP NetWeaver or SAP S/4HANA Application server instances using the RHEL HA Add-On Red Hat Enterprise Linux for SAP Solutions 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/index |
Chapter 7. Running and removing JBoss EAP as a service on RHEL | Chapter 7. Running and removing JBoss EAP as a service on RHEL 7.1. JBoss EAP as a service on RHEL When using the archive, JBoss EAP installation manager, and GUI installation methods, you can configure JBoss EAP to run as a service in Red Hat Enterprise Linux RHEL. This enables the JBoss EAP service to start automatically when the RHEL server starts. Note A server installed using RPM is already configured to run as a service. You can use one of two daemon services: Systemd (System daemon). Systemd scripts are supplied with a basic configuration, and are intended to be used for standalone and domain mode instances. Init, also known as "System V init", or SysVinit . Init scripts are deprecated in favor of Systemd scripts. 7.2. Running JBoss EAP as a Systemd service on RHEL You can configure JBoss EAP to run as a Systemd service in Red Hat Enterprise Linux RHEL. This enables the JBoss EAP service to start automatically when the RHEL server starts. You must generate a systemd unit file with the username:groupname to launch the server. Prerequisites You have downloaded and Installed JBoss EAP. You have set the JAVA_HOME system environment variable. You have administrator access to the RHEL server. Procedure Create the user and group that will be used to launch the server: USD sudo groupadd -r groupname USD sudo useradd -r -g groupname -d USDEAP_HOME -s /sbin/nologin username USD sudo chown -R username:groupname USDEAP_HOME Note The commands in this procedure require root privileges to run. Either run su - to switch to the root user or preface the commands with sudo . Generate the systemd unit file: Standalone mode USD sudo cd USDEAP_HOME/bin/systemd USD sudo ./generate_systemd_unit.sh standalone username groupname Domain mode USD sudo cd USDEAP_HOME/bin/systemd USD sudo ./generate_systemd_unit.sh domain username groupname Configure the start-up options in the jboss-eap-standalone.conf or jboss-eap-domain.conf by using a text editor and set the options for your JBoss EAP installation: Standalone mode USD sudo vim USDEAP_HOME/bin/systemd/jboss-eap-standalone.conf Domain mode USD sudo vim USDEAP_HOME/bin/systemd/jboss-eap-domain.conf Copy the systemd unit file and the server configuration file, and reload the service daemon: Standalone mode USD sudo cp USDEAP_HOME/bin/systemd/jboss-eap-standalone.service USD(pkg-config systemd --variable=systemd_system_unit_dir) USD sudo cp USDEAP_HOME/bin/systemd/jboss-eap-standalone.conf /etc/sysconfig/ USD sudo systemctl daemon-reload Domain mode USD sudo cp USDEAP_HOME/bin/systemd/jboss-eap-domain.service USD(pkg-config systemd --variable=systemd_system_unit_dir) USD sudo cp USDEAP_HOME/bin/systemd/jboss-eap-domain.conf /etc/sysconfig/ USD sudo systemctl daemon-reload Enable the service: Standalone mode USD sudo systemctl enable jboss-eap-standalone.service USD sudo systemctl start jboss-eap-standalone.service Domain mode USD sudo systemctl enable jboss-eap-domain.service USD sudo systemctl start jboss-eap-domain.service Additional resources For more information about controlling the state of services, see Management system services in the RHEL Configuring basic system settings guide. For more information about viewing error logs, see Bootup logging in the JBoss EAP Configuration Guide. 7.3. Removing JBoss EAP Systemd service on RHEL You can remove the Systemd services associated with your server instance. The steps describe how to do it. 
Prerequisites You have JBoss EAP installed as a Systemd service. Procedure When the service is running, open a terminal and stop the service with one of the following commands: Standalone mode USD sudo systemctl stop jboss-eap-standalone.service Domain mode USD sudo systemctl stop jboss-eap-domain.service Note The commands in this procedure require root privileges to run. Either run su - to switch to the root user or preface the commands with sudo . Disable the service: Standalone mode USD sudo systemctl disable jboss-eap-standalone.service Domain mode USD sudo systemctl disable jboss-eap-domain.service Delete the configuration file and startup script: Standalone mode USD sudo rm "USD(pkg-config systemd --variable=systemd_system_unit_dir)/jboss-eap-standalone.service" USD sudo rm "/etc/sysconfig/jboss-eap-standalone.conf" USD sudo systemctl daemon-reload Domain mode USD sudo rm "USD(pkg-config systemd --variable=systemd_system_unit_dir)/jboss-eap-domain.service" USD sudo rm "/etc/sysconfig/jboss-eap-domain.conf" USD sudo systemctl daemon-reload 7.4. Running JBoss EAP as an Init service on RHEL Note Init scripts are deprecated in favor of Systemd scripts. Additional packages may need to be installed to provide this functionality. For example, when using RHEL 9, you must install the initscripts package to use Init services. You can configure JBoss EAP to run as a service in Red Hat Enterprise Linux RHEL. This enables the JBoss EAP service to start automatically when the RHEL server starts. Prerequisites You have downloaded and Installed JBoss EAP. You have set the JAVA_HOME system environment variable. You have administrator access to the RHEL server. When using RHEL, you have the initscripts package installed. Procedure Configure the start-up options in the jboss-eap.conf file by opening the jboss-eap.conf in a text editor and set the options for your JBoss EAP installation. Copy the service initialization and configuration files into the system directories: Copy the modified service configuration file to the /etc/default directory. Note The commands in this procedure require root privileges to run. Either run su - to switch to the root user or preface the commands with sudo . USD sudo cp EAP_HOME/bin/init.d/jboss-eap.conf /etc/default Copy the service startup script to the /etc/init.d directory and give it execute permissions: USD sudo cp EAP_HOME/bin/init.d/jboss-eap-rhel.sh /etc/init.d USD sudo chmod +x /etc/init.d/jboss-eap-rhel.sh USD sudo restorecon /etc/init.d/jboss-eap-rhel.sh Add the new jboss-eap-rhel.sh service to the list of automatically started services using the chkconfig service management command: USD sudo chkconfig --add jboss-eap-rhel.sh Verify that the service has been installed correctly by using the following command: USD sudo service jboss-eap-rhel start Optional: To make the service start automatically when the RHEL server starts, run the following command: USD sudo chkconfig jboss-eap-rhel.sh on Verification To check the permissions of a file, enter the ls -l command in the directory containing the file. To check that the automatic service start is enabled, enter the following command: USD sudo chkconfig --list jboss-eap-rhel.sh Additional resources For more information about controlling the state of services, see Management system services in the RHEL Configuring basic system settings guide. For more information about viewing error logs, see Bootup logging in the RHEL Configuration Guide. 7.5. 
Removing JBoss EAP Init service on RHEL You can remove the Init service associated with your server instance. The steps describe how to do it. Prerequisites You have JBoss EAP installed as an Init service. Procedure When the service is running, open a terminal and stop the service with the following command: USD sudo service jboss-eap-rhel.sh stop Note The commands in this procedure require root privileges to run. Either run su - to switch to the root user or preface the commands with sudo. Remove JBoss EAP from the list of services: USD sudo chkconfig --del jboss-eap-rhel.sh Delete the configuration file and startup script: USD sudo rm /etc/init.d/jboss-eap-rhel.sh USD sudo rm /etc/default/jboss-eap.conf | [
"sudo groupadd -r groupname sudo useradd -r -g groupname -d USDEAP_HOME -s /sbin/nologin username sudo chown -R username:groupname USDEAP_HOME",
"sudo cd USDEAP_HOME/bin/systemd sudo ./generate_systemd_unit.sh standalone username groupname",
"sudo cd USDEAP_HOME/bin/systemd sudo ./generate_systemd_unit.sh domain username groupname",
"sudo vim USDEAP_HOME/bin/systemd/jboss-eap-standalone.conf",
"sudo vim USDEAP_HOME/bin/systemd/jboss-eap-domain.conf",
"sudo cp USDEAP_HOME/bin/systemd/jboss-eap-standalone.service USD(pkg-config systemd --variable=systemd_system_unit_dir) sudo cp USDEAP_HOME/bin/systemd/jboss-eap-standalone.conf /etc/sysconfig/ sudo systemctl daemon-reload",
"sudo cp USDEAP_HOME/bin/systemd/jboss-eap-domain.service USD(pkg-config systemd --variable=systemd_system_unit_dir) sudo cp USDEAP_HOME/bin/systemd/jboss-eap-domain.conf /etc/sysconfig/ sudo systemctl daemon-reload",
"sudo systemctl enable jboss-eap-standalone.service sudo systemctl start jboss-eap-standalone.service",
"sudo systemctl enable jboss-eap-domain.service sudo systemctl start jboss-eap-domain.service",
"sudo systemctl stop jboss-eap-standalone.service",
"sudo systemctl stop jboss-eap-domain.service",
"sudo systemctl disable jboss-eap-standalone.service",
"sudo systemctl disable jboss-eap-domain.service",
"sudo rm \"USD(pkg-config systemd --variable=systemd_system_unit_dir)/jboss-eap-standalone.service\" sudo rm \"/etc/sysconfig/jboss-eap-standalone.conf\" sudo systemctl daemon-reload",
"sudo rm \"USD(pkg-config systemd --variable=systemd_system_unit_dir)/jboss-eap-domain.service\" sudo rm \"/etc/sysconfig/jboss-eap-domain.conf\" sudo systemctl daemon-reload",
"sudo cp EAP_HOME/bin/init.d/jboss-eap.conf /etc/default",
"sudo cp EAP_HOME/bin/init.d/jboss-eap-rhel.sh /etc/init.d sudo chmod +x /etc/init.d/jboss-eap-rhel.sh sudo restorecon /etc/init.d/jboss-eap-rhel.sh",
"sudo chkconfig --add jboss-eap-rhel.sh",
"sudo service jboss-eap-rhel start",
"sudo chkconfig jboss-eap-rhel.sh on",
"sudo chkconfig --list jboss-eap-rhel.sh",
"sudo service jboss-eap-rhel.sh stop",
"sudo chkconfig --del jboss-eap-rhel.sh",
"sudo rm /etc/init.d/jboss-eap-rhel.sh sudo rm /etc/default/jboss-eap.conf"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/assembly_running-jboss-eap-as-a-service-on-rhel-and-microsoft_default |
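The procedure above tells you to set start-up options in jboss-eap.conf before copying it to /etc/default, but it does not show what those options look like. The following sketch illustrates the kind of shell-style assignments the file typically contains; the variable names and values shown here (JBOSS_HOME, JBOSS_USER, JBOSS_MODE, JBOSS_CONFIG, STARTUP_WAIT) are assumptions for illustration only, so use the options documented in the jboss-eap.conf file shipped with your installation.
# Example /etc/default/jboss-eap.conf (illustrative values only)
JBOSS_HOME="/opt/jboss-eap"        # location of the JBoss EAP installation
JBOSS_USER=jboss-eap               # system user that the service runs as
JBOSS_MODE=standalone              # standalone or domain mode
JBOSS_CONFIG=standalone.xml        # server configuration file to use
STARTUP_WAIT=60                    # seconds the init script waits for the server to start
After editing the file, copy it to /etc/default again and restart the service, for example with sudo service jboss-eap-rhel.sh restart, so that the init script picks up the new values.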
Chapter 4. Embedding applications for offline use | Chapter 4. Embedding applications for offline use You can embed microservices-based workloads and applications in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. Embedding means you can run a Red Hat build of MicroShift cluster in air-gapped, disconnected, or offline environments. 4.1. Embedding workload container images for offline use To embed container images in devices at the edge that do not have any network connection, you must create a new container, mount the ISO, and then copy the contents into the file system. Prerequisites You have root access to the host. Application RPMs have been added to a blueprint. You installed the OpenShift CLI ( oc ). Procedure Render the manifests, extract all of the container image references, and translate the application image references into blueprint container sources by running the following command: $ oc kustomize ~/manifests | grep "image:" | grep -oE '[^ ]+$' | while read line; do echo -e "[[containers]]\nsource = \"${line}\"\n"; done >> <my_blueprint>.toml Push the updated blueprint to image builder by running the following command: $ sudo composer-cli blueprints push <my_blueprint>.toml If your workload containers are located in a private repository, you must provide image builder with the necessary pull secrets: Set the auth_file_path in the [containers] section of the /etc/osbuild-worker/osbuild-worker.toml configuration file to point to the pull secret. If needed, create a directory and file for the pull secret, for example: Example directory and file [containers] auth_file_path = "/<path>/pull-secret.json", where /<path> is the custom location previously set for copying and retrieving images. Build the container image by running the following command: $ sudo composer-cli compose start-ostree <my_blueprint> edge-commit Proceed with your preferred rpm-ostree image flow, such as waiting for the build to complete, exporting the image and integrating it into your rpm-ostree repository, or creating a bootable ISO. 4.2. Additional resources Options for embedding Red Hat build of MicroShift applications in a RHEL for Edge image Creating the RHEL for Edge image Add the blueprint to image builder and build the ISO Download the ISO and prepare it for use Upgrading RHEL for Edge systems | [
"oc kustomize ~/manifests | grep \"image:\" | grep -oE '[^ ]+USD' | while read line; do echo -e \"[[containers]]\\nsource = \\\"USD{line}\\\"\\n\"; done >><my_blueprint>.toml",
"sudo composer-cli blueprints push <my_blueprint>.toml",
"[containers] auth_file_path = \"/<path>/pull-secret.json\" 1",
"sudo composer-cli compose start-ostree <my_blueprint> edge-commit"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/microshift-embed-apps-offline-use |
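The extraction command in the procedure appends one [[containers]] entry for every image: line it finds, so running it against several manifest directories can produce duplicate sources. The sketch below, which assumes hypothetical directory names under ~/manifests and a blueprint file called my-blueprint.toml, deduplicates the references before appending them:
# Collect unique workload image references from several kustomize directories
# and append them to the blueprint as container sources (illustrative paths)
for dir in ~/manifests/app1 ~/manifests/app2; do
    oc kustomize "$dir"
done | grep "image:" | grep -oE '[^ ]+$' | sort -u | while read -r image; do
    printf '[[containers]]\nsource = "%s"\n\n' "$image"
done >> my-blueprint.toml

# Push the updated blueprint and confirm that image builder accepted it
sudo composer-cli blueprints push my-blueprint.toml
sudo composer-cli blueprints show my-blueprint
Sorting the references also makes later diffs of the blueprint easier to review.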
Chapter 23. MVEL | Chapter 23. MVEL Overview MVEL is a Java-based dynamic language that is similar to OGNL, but is reported to be much faster. The MVEL support is in the camel-mvel module. Syntax You use the MVEL dot syntax to invoke Java methods, for example: Because MVEL is dynamically typed, it is unnecessary to cast the message body instance (of Object type) before invoking the getFamilyName() method. You can also use an abbreviated syntax for invoking bean attributes, for example: Adding the MVEL module To use MVEL in your routes you need to add a dependency on camel-mvel to your project as shown in Example 23.1, "Adding the camel-mvel dependency" . Example 23.1. Adding the camel-mvel dependency Built-in variables Table 23.1, "MVEL variables" lists the built-in variables that are accessible when using MVEL. Table 23.1. MVEL variables Name Type Description this org.apache.camel.Exchange The current Exchange exchange org.apache.camel.Exchange The current Exchange exception Throwable the Exchange exception (if any) exchangeID String the Exchange ID fault org.apache.camel.Message The Fault message(if any) request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties Map The Exchange properties property( name ) Object The value of the named Exchange property property( name , type ) Type The typed value of the named Exchange property Example Example 23.2, "Route using MVEL" shows a route that uses MVEL. Example 23.2. Route using MVEL | [
"getRequest().getBody().getFamilyName()",
"request.body.familyName",
"<!-- Maven POM File --> <properties> <camel-version>2.23.2.fuse-7_13_0-00013-redhat-00001</camel-version> </properties> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mvel</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>",
"<camelContext> <route> <from uri=\"seda:foo\"/> <filter> <language langauge=\"mvel\">request.headers.foo == 'bar'</language> <to uri=\"seda:bar\"/> </filter> </route> </camelContext>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/mvel |
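If the filter in Example 23.2 fails because the mvel language cannot be resolved, the camel-mvel module is typically missing from the runtime classpath. The following sketch shows one way to check the resolved dependencies from a Maven project; the org.mvel:mvel2 coordinates for the underlying MVEL library are an assumption and may differ in your bill of materials.
# Confirm that camel-mvel, and the MVEL library it pulls in, resolve in the project
mvn -q dependency:tree -Dincludes=org.apache.camel:camel-mvel
mvn -q dependency:tree -Dincludes=org.mvel:mvel2
If neither command lists the artifact, re-check the dependency shown in Example 23.1 and rebuild.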
Part VIII. System Backup and Recovery | Part VIII. System Backup and Recovery This part describes how to use the Relax-and-Recover (ReaR) disaster recovery and system migration utility. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/part-system_backup_and_recovery |
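As a brief orientation before the chapters that follow, the sketch below shows the shape of a minimal ReaR run: a small /etc/rear/local.conf plus the command that builds a rescue image and a backup. The NFS server name and export path are placeholders, and the variables you actually need depend on the backup method you choose.
# Minimal /etc/rear/local.conf (the NFS location is a placeholder)
cat > /etc/rear/local.conf <<'EOF'
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://nfs-server/export/rear
EOF

# Create the rescue ISO and a backup of the system in one step
rear -v mkbackup

# On replacement hardware, boot the rescue ISO and run: rear recover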
Chapter 4. New features | Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.9. 4.1. Installer and image creation Support to both legacy and UEFI boot for AWS EC2 images Previously, RHEL image builder created EC2 AMD or Intel 64-bit architecture AMIs images with support only for the legacy boot type. As a consequence, it was not possible to take advantage of certain AWS features requiring UEFI boot, such as secure boot. This enhancement extends the AWS EC2 AMD or Intel 64-bit architecture AMI image to support UEFI boot, in addition to the legacy BIOS boot. As a result, it is now possible to take advantage of AWS features which require booting the image with UEFI. Jira:RHELDOCS-16339 [1] New boot option inst.wait_for_disks= to add wait time for loading a kickstart file or the kernel drivers Sometimes, it may take a few seconds to load a kickstart file or the kernel drivers from the device with the OEMDRV label during the boot process. To adjust the wait time, you can now use the new boot option, inst.wait_for_disks= . Using this option, you can specify how many seconds to wait before the installation. The default time is set to 5 seconds, however, you can use 0 seconds to minimize the delay. For more information about this option, see Storage boot options . Bugzilla:1770969 New network kickstart options to control DNS handling You can now control DNS handling using the network kickstart command with the following new options. Use these new options with the --device option. The --ipv4-dns-search and --ipv6-dns-search options allow you to set DNS search domains manually. These options mirror their respective NetworkManager properties, for example: The --ipv4-ignore-auto-dns and --ipv6-ignore-auto-dns options allow you to ignore DNS settings from DHCP. They do not require any arguments. Bugzilla:1656662 [1] 4.2. Security opencryptoki rebased to 3.21.0 The opencryptoki package has been rebased to version 3.21.0, which provides many enhancements and bug fixes. Most notably, opencryptoki now supports the following features: Concurrent hardware security module (HSM) master key changes The protected-key option to transform a chosen key into a protected key Additional key types, such as DH, DSA, and generic secret key types EP11 host library version 4 AES-XTS key type IBM-specific Kyber key type and mechanism Additional IBM-specific Dilithium key round 2 and 3 variants Additionally, pkcsslotd slot manager no longer runs as root and opencryptoki offers further hardening. With this update, you can also use the following set of new commands: p11sak set-key-attr To modify keys p11sak copy-key To copy keys p11sak import-key To import keys p11sak export-key To export keys Bugzilla:2159697 [1] fapolicyd now provides rule numbers for troubleshooting With this enhancement, new kernel and Audit components allow the fapolicyd service to send the number of the rule that causes a denial to the fanotify API. As a result, you can troubleshoot problems related to fapolicyd more precisely. 
Jira:RHEL-628 ANSSI-BP-028 security profiles updated to version 2.0 The following French National Agency for the Security of Information Systems (ANSSI) BP-028 profiles in the SCAP Security Guide were updated to be aligned with version 2.0: ANSSI-BP-028 Minimal Level ANSSI-BP-028 Intermediary Level ANSSI-BP-028 Enhanced Level ANSSI-BP-028 High Level Bugzilla:2155789 Better definition of interactive users The rules in the scap-security-guide package were improved to provide more consistent interactive user configuration. Previously, some rules used different approaches for identifying interactive and non-interactive users. With this update, we have unified the definitions of interactive users. User accounts with UID greater than or equal to 1000 are now considered interactive, with the exception of the nobody and nfsnobody accounts and with the exception of accounts that use /sbin/nologin as the login shell. This change affects the following rules: accounts_umask_interactive_users accounts_user_dot_user_ownership accounts_user_dot_group_ownership accounts_user_dot_no_world_writable_programs accounts_user_interactive_home_directory_defined accounts_user_interactive_home_directory_exists accounts_users_home_files_groupownership accounts_users_home_files_ownership accounts_users_home_files_permissions file_groupownership_home_directories file_ownership_home_directories file_permissions_home_directories file_permissions_home_dirs no_forward_files Bugzilla:2157877 , Bugzilla:2178740 The DISA STIG profile now supports audit_rules_login_events_faillock With this enhancement, the SCAP Security Guide audit_rules_login_events_faillock rule, which references STIG ID RHEL-08-030590, has been added to the DISA STIG profile for RHEL 8. This rule checks if the Audit daemon is configured to record any attempts to modify login event logs stored in the /var/log/faillock directory. Bugzilla:2167999 OpenSCAP rebased to 1.3.8 The OpenSCAP packages have been rebased to upstream version 1.3.8. This version provides various bug fixes and enhancements, most notably: Fixed systemd probes to not ignore some systemd units Added offline capabilities to the shadow OVAL probe Added offline capabilities to the sysctl OVAL probe Added auristorfs to the list of network filesystems Created a workaround for issues with tailoring files produced by the autotailor utility Bugzilla:2217441 SCAP Security Guide rebased to version 0.1.69 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.69. This version provides various enhancements and bug fixes, most notably three new SCAP profiles for RHEL 9 which are aligned with three levels of the CCN-STIC-610A22 Guide issued by the National Cryptologic Center of Spain in 2022-10: CCN Red Hat Enterprise Linux 9 - Basic CCN Red Hat Enterprise Linux 9 - Intermediate CCN Red Hat Enterprise Linux 9 - Advanced Bugzilla:2221695 FIPS-enabled in-place upgrades from RHEL 8.8 and later to RHEL 9.2 and later are supported With the release of the RHBA-2023:3824 advisory, you can perform an in-place upgrade of a RHEL 8.8 and later system to a RHEL 9.2 and later system with FIPS mode enabled. Bugzilla:2097003 crypto-policies permitted_enctypes no longer break replications in FIPS mode Before this update, an IdM server running on RHEL 8 sent an AES-256-HMAC-SHA-1-encrypted service ticket that an IdM replica running RHEL 9 in FIPS mode. 
Consequently, the default permitted_enctypes krb5 configuration broke a replication between the RHEL 8 IdM server and the RHEL 9 IdM replica in FIPS mode. With this update, the values of the permitted_enctypes krb5 configuration option depend on the mac and cipher crypto-policy values. That allows the prioritization of the interoperable encryption types by default. As additional results of this update, the arcfour-hmac-md5 option is available only in the LEGACY:AD-SUPPORT subpolicy and the aes256-cts-hmac-sha1-96 is no longer available in the FUTURE policy. Note If you use Kerberos, verify the order of the values of permitted_enctypes in the /etc/crypto-policies/back-ends/krb5.config file. If your scenario requires a different order, apply a custom cryptographic subpolicy. Bugzilla:2219912 Audit now supports FANOTIFY record fields This update of the audit packages introduces support for FANOTIFY Audit record fields. The Audit subsystem now logs additional information in the AUDIT_FANOTIFY record, notably: fan_type to specify the type of a FANOTIFY event fan_info to specify additional context information sub_trust and obj_trust to indicate trust levels for a subject and an object involved in an event As a result, you can better understand why the Audit system denied access in certain cases. This can help you write policies for tools such as the fapolicyd framework. Bugzilla:2216666 New SELinux boolean to allow QEMU Guest Agent executing confined commands Previously, commands that were supposed to execute in a confined context through the QEMU Guest Agent daemon program, such as mount , failed with an Access Vector Cache (AVC) denial. To be able to execute these commands, the guest-agent must run in the virt_qemu_ga_unconfined_t domain. Therefore, this update adds the SELinux policy boolean virt_qemu_ga_run_unconfined that allows guest-agent to make the transition to virt_qemu_ga_unconfined_t for executables located in any of the following directories: /etc/qemu-ga/fsfreeze-hook.d/ /usr/libexec/qemu-ga/fsfreeze-hook.d/ /var/run/qemu-ga/fsfreeze-hook.d/ In addition, the necessary rules for transitions for the qemu-ga daemon have been added to the SELinux policy boolean. As a result, you can now execute confined commands through the QEMU Guest Agent without AVC denials by enabling the virt_qemu_ga_run_unconfined boolean. Bugzilla:2093355 4.3. Infrastructure services Postfix now supports SRV lookups With this enhancement, you can now use the Postfix DNS service records resolution (SRV) to automatically configure mail clients and balance load of servers. Additionally, you can prevent mail delivery disruptions caused by temporary DNS issues or misconfigured SRV records by using the following SRV-related options in your Postfix configuration: use_srv_lookup You can enable discovery for the specified service by using DNS SRV records. allow_srv_lookup_fallback You can use a cascading approach to locating a service. ignore_srv_lookup_error You can ensure that the service discovery remains functional even if SRV records are not available or encounter errors. Bugzilla:1787010 You can now specify TLS 1.3 cipher suites in vsftpd With this enhancement, you can use the new ssl_ciphersuites option to configure which cipher suites vsftpd uses. As a result, you can specify TLS 1.3 cipher suites that differ from the TLS versions. To specify multiple cipher suites, separate entries with colons (:). 
Bugzilla:2069733 Generic LF-to-CRLF driver is available in cups-filters With this enhancement, you can now use the Generic LF-to-CRLF driver, which converts LF characters to CR+LF characters for printers accepting files with CR+LF characters. The carriage return (CR) and line feed (LF) are control characters that mark the end of lines. As a result, by using this driver, you can send an LF character terminated file from your application to a printer accepting only CR+LF characters. The Generic LF-to-CRLF driver is a renamed version of the text-only driver from RHEL 7. The new name reflects its actual functionality. Bugzilla:2118406 [1] 4.4. Networking iproute rebased to version 6.2.0 The iproute packages have been upgraded to upstream version 6.2.0, which provides a number of enhancements and bug fixes over the version. The most notable changes are: The new ip stats command manages and shows interface statistics. By default, the ip stats show command displays statistics for all network devices, including bridges and bonds. You can filter the output by using the dev and group options. For further details, see the ip-stats(8) man page. The ss utility now provides the -T ( --threads ) option to display thread information, which extends the -p ( --processes ) option. For further details, see the ss(8) man page. You can use the new bridge fdb flush command to remove specific forwarding database (fdb) entries which match a supplied option. For further details, see the bridge(8) man page. Jira:RHEL-424 [1] Security improvement of the default nftables service configuration This enhancement adds the do_masquerade chain to the default nftables service configuration in the /etc/sysconfig/nftables/nat.nft file. This reduces the risk of a port shadow attack, which is described in CVE-2021-3773 . The first rule in the do_masquerade chain detects suitable packets and enforces source port randomization to reduce the risk of port shadow attacks. Bugzilla:2061942 NetworkManager supports the no-aaaa DNS option You can now use the no-aaaa option to configure DNS settings on managed nodes by suppressing AAAA queries generated by the stub resolver. Previously, there was no option to suppress AAAA queries generated by the stub resolver, including AAAA lookups triggered by NSS-based interfaces such as getaddrinfo ; only DNS lookups were affected. With this enhancement, you can disable IPv6 resolution by using the nmcli utility. After a restart of the NetworkManager service, the no-aaaa setting gets reflected in the /etc/resolv.conf file, with additional control over DNS lookups. Bugzilla:2144521 The nm-cloud-setup utility now supports IMDSv2 configuration Users can configure an AWS Red Hat Enterprise Linux EC2 instance with Instance Metadata Service Version 2 (IMDSv2) with the nm-cloud-setup utility. To comply with improved security that restricts unauthorized access to EC2 metadata and new features, integration between AWS and Red Hat services is necessary to provide advanced features. This enhancement enables the nm-cloud-setup utility to fetch and save the IMDSv2 tokens, verify an EC2 environment, and retrieve information about available interfaces and IP configuration by using the secured IMDSv2 tokens. Bugzilla:2151987 The libnftnl package rebased to version 1.2.2 The Netlink API to the in-kernel nf_tables subsystem ( libnftnl ) package has been rebased. 
Notable changes and enhancements include: Added features: Nesting of the udata attribute Resetting TCP options with the exthdr expression The sdif and sdifname meta keywords Support for a new attribute NFTNL_CHAIN_FLAGS in the nftnl_chain struct, to communicate flags between the kernel and user space. Support for the nftnl_set struct nftables sets backend to add expressions to sets and set elements. Comments to sets, tables, objects, and chains The nftnl_table struct now has an NFTNL_TABLE_OWNER attribute. Set this attribute to enable the kernel to communicate the owner to the user space. Readiness for incremental updates to flowtable device The typeof keyword related nftnl_set udata definitions The chain ID attribute The function to remove expressions from a rule A new last expression Improved bitwise expressions: Newly added op and data attributes Left and right shifts Aligned with debug output of other expressions Improved socket expressions: Added the wildcard attribute Support for cgroups v2 Improved debug output: Included the key_end data register in set elements Dropped unused registers from masq and nat expressions Applied fix for verdict map elements Removed leftovers from dropped XML formatting Support for payload offset of inner header Bugzilla:2211096 4.5. Kernel Kernel version in RHEL 8.9 Red Hat Enterprise Linux 8.9 is distributed with the kernel version 4.18.0-513.5.1. Bugzilla:2232558 The RHEL kernel now supports AutoIBRS Automatic Indirect Branch Restricted Speculation (AutoIBRS) is a feature provided by the AMD EPYC 9004 Genoa family of processors and later CPU versions. AutoIBRS is the default mitigation for the Spectre v2 CPU vulnerability, which boosts performance and improves scalability. Bugzilla:1989283 [1] The Intel(R) QAT kernel driver rebased to upstream version 6.2 The Intel(R) Quick Assist Technology (QAT) has been rebased to upstream version 6.2. The Intel(R) QAT includes accelerators optimized for symmetric and asymmetric cryptography, compression performance, and other CPU intensive tasks. The rebase includes many bug fixes and enhancements. The most notable enhancement is the support available for following hardware accelerator devices for QAT GEN4: Intel Quick Assist Technology 401xx devices Intel Quick Assist Technology 402xx devices Bugzilla:2144529 [1] makedumpfile rebased to version 1.7.2 The makedumpfile tool, which makes the crash dump file small by compressing pages or excluding memory pages that are not required, has been rebased to version 1.7.2. The rebase includes many bug fixes and enhancements. The most notable change is the added 5-level paging mode for standalone dump ( sadump ) mechanism on AMD and Intel 64-bit architectures. The 5-level paging mode extends the processor's linear address width to allow applications access larger amounts of memory. 5-level paging extends the size of virtual addresses from 48 to 57 bits and the physical addresses from 46 to 52 bits. Bugzilla:2173791 4.6. File systems and storage Support for specifying a UUID when creating a GFS2 file system The mkfs.gfs2 command now supports the new -U option, which makes it possible to specify the file system UUID for the file system you create. If you omit this option, the file system's UUID is generated randomly. 
Bugzilla:2180782 fuse3 now allows invalidating a directory entry without triggering umount With this update, a new mechanism has been added to fuse3 package, that allows invalidating a directory entry without automatically triggering the umount of any mounts that exists on the entry. Bugzilla:2171095 [1] 4.7. High availability and clusters Pacemaker's scheduler now tries to satisfy all mandatory colocation constraints before trying to satisfy optional colocation constraints Previously, colocation constraints were considered one by one regardless of whether they were mandatory or optional. This meant that certain resources could be unable to run even though a node assignment was possible. Pacemaker's scheduler now tries to satisfy all mandatory colocation constraints, including the implicit constraints between group members, before trying to satisfy optional colocation constraints. As a result, resources with a mix of optional and mandatory colocation constraints are now more likely to be able to run. Bugzilla:1876173 IPaddr2 and IPsrcaddr cluster resource agents now support policy-based routing The IPaddr2 and IPsrcaddr cluster resource agents now support policy-based routing, which enables you to configure complex routing scenarios. Policy-based routing requires that you configure the resource agent's table parameter. Bugzilla:2040110 The Filesystem resource agent now supports the EFS file system type The ocf:heartbeat:Filesystem cluster resource agent now supports the Amazon Elastic File System (EFS). You can now specify fstype=efs when configuring a Filesystem resource. Bugzilla:2049319 The alert_snmp.sh.sample alert agent now supports SNMPv3 The alert_snmp.sh.sample alert agent, which is the sample alert agent provided with Pacemaker, now supports the SNMPv3 protocol as well as SNMPv2. With this update, you can copy the alert_snmp.sh.sample agent without modification to use SNMPv3 with Pacemaker alerts. Bugzilla:2160206 New enabled alert meta option to disable a Pacemaker alert Pacemaker alerts and alert recipients now support an enabled meta option. Setting the enabled meta option to false for an alert disables the alert. Setting the enabled meta option to true for an alert and false for a particular recipient disables the alert for that recipient. The default value for the enabled meta option is true . You can use this option to temporarily disable an alert for any reason, such as planned maintenance. Bugzilla:2078611 Pacemaker Remote nodes now preserve transient node attributes after a brief connection outage Previously, when a Pacemaker Remote connection was lost, Pacemaker would always purge its transient node attributes. This was unnecessary if the connection was quickly recoverable and the remote daemon had not restarted in the meantime. Pacemaker Remote nodes now preserve transient node attributes after a brief, recoverable connection outage. Bugzilla:2030869 Enhancements to the pcs property command The pcs property command now supports the following enhancements: The pcs property config --output-format= option Specify --output-format=cmd to display the pcs property set command created from the current cluster properties configuration. You can use this command to re-create configured cluster properties on a different system. Specify --output-format=json to display the configured cluster properties in JSON format. Specify output-format=text to display the configured cluster properties in plain text format, which is the default value for this option. 
The pcs property defaults command, which replaces the deprecated pcs property --defaults option The pcs property describe command, which describes the meaning of cluster properties. Bugzilla:2166289 4.8. Dynamic programming languages, web and database servers A new nodejs:20 module stream is fully supported A new module stream, nodejs:20 , previously available as a Technology Preview, is fully supported with the release of the RHEA-2023:7249 advisory. The nodejs:20 module stream now provides Node.js 20.9 , which is a Long Term Support (LTS) version. Node.js 20 included in RHEL 8.9 provides numerous new features, bug fixes, security fixes, and performance improvements over Node.js 18 available since RHEL 8.7. Notable changes include: The V8 JavaScript engine has been upgraded to version 11.3. The npm package manager has been upgraded to version 9.8.0. Node.js introduces a new experimental Permission Model. Node.js introduces a new experimental Single Executable Application (SEA) feature. Node.js provides improvements to the Experimental ECMAScript modules (ESM) loader. The native test runner, introduced as an experimental node:test module in Node.js 18 , is now considered stable. To install the nodejs:20 module stream, use: If you want to upgrade from the nodejs:18 stream, see Switching to a later stream . For information about the length of support for the nodejs Application Streams, see Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2186718 A new filter argument to the Python tarfile extraction functions To mitigate CVE-2007-4559 , Python adds a filter argument to the tarfile extraction functions. The argument allows turning tar features off for increased safety (including blocking the CVE-2007-4559 directory traversal attack). If a filter is not specified, the 'data' filter, which is the safest but most limited, is used by default in RHEL. In addition, Python emits a warning when your application has been affected. For more information, including instructions to hide the warning, see the Knowledgebase article Mitigation of directory traversal attack in the Python tarfile library (CVE-2007-4559) . Jira:RHELDOCS-16405 [1] The HTTP::Tiny Perl module now verifies TLS certificates by default The default value for the verify_SSL option in the HTTP::Tiny Perl module has been changed from 0 to 1 to verify TLS certificates when using HTTPS. This change fixes CVE-2023-31486 for HTTP::Tiny and CVE-2023-31484 for the CPAN Perl module. To make support for TLS verification available, this update adds the following dependencies to the perl-HTTP-Tiny package: perl-IO-Socket-SSL perl-Mozilla-CA perl-Net-SSLeay Bugzilla:2228409 [1] A new environment variable in Python to control parsing of email addresses To mitigate CVE-2023-27043 , a backward incompatible change to ensure stricter parsing of email addresses was introduced in Python 3. The update in RHSA-2024:0256 introduces a new PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING environment variable. When you set this variable to true , the , less strict parsing behavior is the default for the entire system: However, individual calls to the affected functions can still enable stricter behavior. You can achieve the same result by creating the /etc/python/email.cfg configuration file with the following content: For more information, see the Knowledgebase article Mitigation of CVE-2023-27043 introducing stricter parsing of email addresses in Python . Jira:RHELDOCS-17369 [1] 4.9. 
Compilers and development tools Improved string and memory routine performance on Intel(R) Xeon(R) v5-based hardware in glibc Previously, the default amount of cache used by glibc for string and memory routines resulted in lower than expected performance on Intel(R) Xeon(R) v5-based systems. With this update, the amount of cache to use has been tuned to improve performance. Bugzilla:2180462 GCC now supports preserving register arguments With this update, you can now store argument register content to the stack and generate proper Call Frame Information (CFI) to allow the unwinder to locate it without negatively impacting performance. Bugzilla:2168205 [1] New GCC Toolset 13 GCC Toolset 13 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The GCC compiler has been updated to version 13.1.1, which provides many bug fixes and enhancements that are available in upstream GCC. The following tools and versions are provided by GCC Toolset 13: Tool Version GCC 13.1.1 GDB 12.1 binutils 2.40 dwz 0.14 annobin 12.20 To install GCC Toolset 13, run the following command as root: To run a tool from GCC Toolset 13: To run a shell session where tool versions from GCC Toolset 13 override system versions of these tools: For more information, seehttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/developing_c_and_cpp_applications_in_rhel_8/additional-toolsets-for-development_developing-applications#gcc-toolset-13_assembly_additional-toolsets-for-development[GCC Toolset 13] and Using GCC Toolset . Bugzilla:2171898 [1] , Bugzilla:2171928, Bugzilla:2188490 GCC Toolset 13: GCC rebased to version 13.1.1 In GCC Toolset 13, the GNU Compiler Collection (GCC) has been updated to version 13.1.1. Notable changes include: General improvements OpenMP: OpenMP 5.0: Fortran now supports some non-rectangular loop nests. Such support was added for C/C++ in GCC 11. Many OpenMP 5.1 features have been added. Initial support for OpenMP 5.2 features has been added. A new debug info compression option value, -gz=zstd , is now available. The -Ofast , -ffast-math , and -funsafe-math-optimizations options no longer add startup code to alter the floating-point environment when producing a shared object with the -shared option. GCC can now emit its diagnostics using Static Analysis Results Interchange Format (SARIF), a JSON-based format suited for capturing the results of static analysis tools (like GCC's -fanalyzer ). You can also use SARIF to capture other GCC warnings and errors in a machine-readable format. Link-time optimization improvements have been implemented. New languages and language-specific improvements C family: A new -Wxor-used-as-pow option warns about uses of the exclusive or ( ^ ) operator where the user might have meant exponentiation. Three new function attributes have been added for documenting int arguments that are file descriptors: attribute ((fd_arg(N))) attribute ((fd_arg_read(N))) attribute ((fd_arg_write(N))) These attributes are also used by -fanalyzer to detect misuses of file descriptors. A new statement attribute, attribute ((assume(EXPR))); , has been added for C++23 portable assumptions. The attribute is supported also in C or earlier C++. GCC can now control when to treat the trailing array of a structure as a flexible array member for the purpose of accessing the elements of such an array. 
By default, all trailing arrays in aggregates are treated as flexible array members. Use the new command-line option -fstrict-flex-arrays to control what array members are treated as flexible arrays. C: Several C23 features have been implemented: Introduced the nullptr constant. Enumerations enhanced to specify underlying types. Requirements for variadic parameter lists have been relaxed. Introduced the auto feature to enable type inference for object definitions. Introduced the constexpr specifier for object definitions. Introduced storage-class specifiers for compound literals. Introduced the typeof object (previously supported as an extension) and the typeof_unqual object. Added new keywords: alignas , alignof , bool , false , static_assert , thread_local , and true . Added the [[noreturn]] attribute to specify that a function does not return execution to its caller. Added support for empty initializer braces. Added support for STDC_VERSION_*_H header version macros. Removed the ATOMIC_VAR_INIT macro. Added the unreachable macro for the <stddef.h> header. Removed trigraphs. Removed unprototyped functions. Added printf and scanf format checking through the -Wformat option for the %wN and %wfN format length modifiers. Added support for identifier syntax of Unicode Standard Annex (UAX) 31. Existing features adopted in C23 have been adjusted to follow C23 requirements and are not diagnosed using the -std=c2x -Wpedantic option. A new -Wenum-int-mismatch option warns about mismatches between an enumerated type and an integer type. C++: Implemented excess precision support through the -fexcess-precision option. It is enabled by default in strict standard modes like -std=c++17 , where it defaults to -fexcess-precision=standard . In GNU standard modes like -std=gnu++20 , it defaults to -fexcess-precision=fast , which restores behavior. The -fexcess-precision option affects the following architectures: Intel 32- and 64-bit using x87 math, in some cases on Motorola 68000, where float and double expressions are evaluated in long double precision. 64-bit IBM Z systems where float expressions are evaluated in double precision. Several architectures that support the std::float16_t or std::bfloat16_t types, where these types are evaluated in float precision. Improved experimental support for C++23, including: Added support for labels at the end of compound statements. Added a type trait to detect reference binding to a temporary. Reintroduced support for volatile compound operations. Added support for the #warning directive. Added support for delimited escape sequences. Added support for named universal character escapes. Added a compatibility and portability fix for the char8_t type. Added static operator() function objects. Simplified implicit moves. Rewriting equality in expressions is now less of a breaking change. Removed non-encodable wide character literals and wide multicharacter literals. Relaxed some constexpr function restrictions. Extended floating-point types and standard names. Implemented portable assumptions. Added support for UTF-8 as a portable source file encoding standard. Added support for static operator[] subscripts. New warnings: -Wself-move warns when a value is moved to itself with std::move . -Wdangling-reference warns when a reference is bound to a temporary whose lifetime has ended. The -Wpessimizing-move and -Wredundant-move warnings have been extended to warn in more contexts. 
The new -nostdlib++ option enables linking with g++ without implicitly linking in the C++ standard library. Changes in the libstdc++ runtime library Improved experimental support for C++20, including: Added the <format> header and the std::format function. Added support in the <chrono> header for the std::chrono::utc_clock clock, other clocks, time zones, and the std::format function. Improved experimental support for C++23, including: Additions to the <ranges> header: views::zip , views::zip_transform , views::adjacent , views::adjacent_transform , views::pairwise , views::slide , views::chunk , views::chunk_by , views::repeat , views::chunk_by , views::cartesian_product , views::as_rvalue , views::enumerate , views::as_const . Additions to the <algorithm> header: ranges::contains , ranges::contains_subrange , ranges::iota , ranges::find_last , ranges::find_last_if , ranges::find_last_if_not , ranges::fold_left , ranges::fold_left_first , ranges::fold_right , ranges::fold_right_last , ranges::fold_left_with_iter , ranges::fold_left_first_with_iter . Support for monadic operations for the std::expected class template. Added constexpr modifiers to the std::bitset , std::to_chars and std::from_chars functions. Added library support for extended floating-point types. Added support for the <experimental/scope> header from version 3 of the Library Fundamentals Technical Specification (TS). Added support for the <experimental/synchronized_value> header from version 2 of the Concurrency TS. Added support for many previously unavailable features in freestanding mode. For example: The std::tuple class template is now available for freestanding compilation. The libstdc++ library adds components to the freestanding subset, such as std::array and std::string_view . The libstdc++ library now respects the -ffreestanding compiler option, so it is no longer necessary to build a separate freestanding installation of the libstdc++ library. Compiling with -ffreestanding will restrict the available features to the freestanding subset, even if the libstdc++ library was built as a full, hosted implementation. New targets and target-specific Improvements The 64-bit ARM architecture: Added support for the armv9.1-a , armv9.2-a , and armv9.3-a arguments for the -march= option. The 32- and 64-bit AMD and Intel architectures: For both C and C++, the __bf16 type is supported on systems with Streaming SIMD Extensions 2 and above enabled. The real __bf16 type is now used for AVX512BF16 instruction intrinsics. Previously, __bfloat16 , a typedef of short, was used. Adjust your AVX512BF16 related source code when upgrading GCC 12 to GCC 13. Added new Instruction Set Architecture (ISA) extensions to support the following Intel instructions: AVX-IFMA whose instruction intrinsics are available through the -mavxifma compiler switch. AVX-VNNI-INT8 whose instruction intrinsics are available through the -mavxvnniint8 compiler switch. AVX-NE-CONVERT whose instruction intrinsics are available through the -mavxneconvert compiler switch. CMPccXADD whose instruction intrinsics are available through the -mcmpccxadd compiler switch. AMX-FP16 whose instruction intrinsics are available through the -mamx-fp16 compiler switch. PREFETCHI whose instruction intrinsics are available through the -mprefetchi compiler switch. RAO-INT whose instruction intrinsics are available through the -mraoint compiler switch. AMX-COMPLEX whose instruction intrinsics are available through the -mamx-complex compiler switch. 
GCC now supports AMD CPUs based on the znver4 core through the -march=znver4 compiler switch. The switch makes GCC consider using 512-bit vectors when auto-vectorizing. Improvements to the static analyzer The static analyzer has gained 20 new warnings: -Wanalyzer-allocation-size -Wanalyzer-deref-before-check -Wanalyzer-exposure-through-uninit-copy -Wanalyzer-imprecise-fp-arithmetic -Wanalyzer-infinite-recursion -Wanalyzer-jump-through-null -Wanalyzer-out-of-bounds -Wanalyzer-putenv-of-auto-var -Wanalyzer-tainted-assertion Seven new warnings relating to misuse of file descriptors: -Wanalyzer-fd-access-mode-mismatch -Wanalyzer-fd-double-close -Wanalyzer-fd-leak -Wanalyzer-fd-phase-mismatch (for example, calling accept on a socket before calling listen on it) -Wanalyzer-fd-type-mismatch (for example, using a stream socket operation on a datagram socket) -Wanalyzer-fd-use-after-close -Wanalyzer-fd-use-without-check Also implemented special-casing handling of the behavior of the open , close , creat , dup , dup2 , dup3 , pipe , pipe2 , read , and write functions. Four new warnings for misuses of the <stdarg.h> header: -Wanalyzer-va-list-leak warns about missing a va_end macro after a va_start or va_copy macro. -Wanalyzer-va-list-use-after-va-end warns about a va_arg or va_copy macro used on a va_list object type that has had the va_end macro called on it. -Wanalyzer-va-arg-type-mismatch type-checks va_arg macro usage in interprocedural execution paths against the types of the parameters that were actually passed to the variadic call. -Wanalyzer-va-list-exhausted warns if a va_arg macro is used too many times on a va_list object type in interprocedural execution paths. Numerous other improvements. Backwards incompatible changes For C++, construction of global iostream objects such as std::cout , std::cin is now done inside the standard library, instead of in every source file that includes the <iostream> header. This change improves the startup performance of C++ programs, but it means that code compiled with GCC 13.1 will crash if the correct version of libstdc++.so is not used at runtime. See the documentation about using the correct libstdc++.so at runtime. Future GCC releases will mitigate the problem so that the program cannot be run at all with an earlier incompatible libstdc++.so . Bugzilla:2172091 [1] GCC Toolset 13: annobin rebased to version 12.20 GCC Toolset 13 provides the annobin package version 12.20. Notable enhancements include: Added support for moving annobin notes into a separate debug info file. This results in reduced executable binary size. Added support for a new smaller note format reduces the size of the separate debuginfo files and the time taken to create these files. Bugzilla:2171923 [1] GCC Toolset 13: GDB rebased to version 12.1 GCC Toolset 13 provides GDB version 12.1. Notable bug fixes and enhancements include: GDB now styles source code and disassembler by default. If styling interferes with automation or scripting of GDB, you can disable it by using the maint set gnu-source-highlight enabled off and maint set style disassembler enabled off commands. GDB now displays backtraces whenever it encounters an internal error. If this affects scripts or automation, you can use the maint set backtrace-on-fatal-signal off command to disable this feature. C/C++ improvements: GDB now treats functions or types involving C++ templates similarly to function overloads. 
You can omit parameter lists to set breakpoints on families of template functions, including types or functions composed of multiple template types. Tab completion has gained similar improvements. Terminal user interface (TUI): tui layout tui focus tui refresh tui window height These are the new names for the old layout , focus , refresh , and winheight TUI commands respectively. The old names still exist as aliases to these new commands. tui window width winwidth Use the new tui window width command, or the winwidth alias, to adjust the width of a TUI window when windows are laid out in horizontal mode. info win This command now includes information about the width of the TUI windows in its output. Machine Interface (MI) changes: The default version of the MI interpreter is now 4 (-i=mi4). The -add-inferior command with no flag now inherits the connection of the current inferior. This restores the behavior of GDB prior to version 10. The -add-inferior command now accepts a --no-connection flag that causes the new inferior to start without a connection. The script field in breakpoint output (which is syntactically incorrect in MI 3 and earlier) has become a list in MI 4. This affects the following commands and events: -break-insert -break-info =breakpoint-created =breakpoint-modified Use the -fix-breakpoint-script-output command to enable the new behavior with earlier MI versions. New commands: maint set internal-error backtrace [on|off] maint show internal-error backtrace maint set internal-warning backtrace [on|off] maint show internal-warning backtrace GDB can now print a backtrace of itself when it encounters internal error or internal warning. This is enabled by default for internal errors and disabled by default for internal warnings. exit You can exit GDB using the new exit command in addition to the existing quit command. maint set gnu-source-highlight enabled [on|off] maint show gnu-source-highlight enabled Enables or disables the GNU Source Highlight library for adding styling to source code. When disabled, the library is not used even if it is available. When the GNU Source Highlight library is not used the Python Pygments library is used instead. set suppress-cli-notifications [on|off] show suppress-cli-notifications Controls if printing the notifications is suppressed for CLI or not. CLI notifications occur when you change the selected context (such as the current inferior, thread, or frame), or when the program being debugged stops (for example: because of hitting a breakpoint, completing source-stepping, or an interrupt). set style disassembler enabled [on|off] show style disassembler enabled When enabled, the command applies styling to disassembler output if GDB is compiled with Python support and the Python Pygments package is available. Changed commands: set logging [on|off] Deprecated and replaced by the set logging enabled [on|off] command. print Printing of floating-point values with base-modifying formats like /x has been changed to display the underlying bytes of the value in the desired base. clone-inferior The clone-inferior command now ensures that the TTY , CMD , and ARGs settings are copied from the original inferior to the new one. All modifications to the environment variables done using the set environment or unset environment commands are also copied to the new inferior. Python API: The new gdb.add_history() function takes a gdb.Value object and adds the value it represents to GDB's history list. 
The function returns an integer, which is the index of the new item in the history list. The new gdb.history_count() function returns the number of values in GDB's value history. The new gdb.events.gdb_exiting event is called with a gdb.GdbExitingEvent object that has the read-only attribute exit_code containing the value of the GDB exit code. This event is triggered prior to GDB's exit before GDB starts to clean up its internal state. The new gdb.architecture_names() function returns a list containing all of the possible Architecture.name() values. Each entry is a string. The new gdb.Architecture.integer_type() function returns an integer type given a size and a signed-ness. The new gdb.TargetConnection object type represents a connection (as displayed by the info connections command). A sub-class, gdb.RemoteTargetConnection , represents remote and extended-remote connections. The gdb.Inferior type now has a connection property that is an instance of the gdb.TargetConnection object, the connection used by this inferior. This can be None if the inferior has no connection. The new gdb.events.connection_removed event registry emits a gdb.ConnectionEvent event when a connection is removed from GDB. This event has a connection property, a gdb.TargetConnection object for the connection being removed. The new gdb.connections() function returns a list of all currently active connections. The new gdb.RemoteTargetConnection.send_packet(PACKET) method is equivalent to the existing maint packet CLI command. You can use it to send a specified packet to the remote target. The new gdb.host_charset() function returns the name of the current host character set as a string. The new gdb.set_parameter( NAME , VALUE ) function sets the GDB parameter NAME to VALUE . The new gdb.with_parameter( NAME , VALUE ) function returns a context manager that temporarily sets the GDB parameter NAME to VALUE and then resets it when the context is exited. The gdb.Value.format_string method now takes a styling argument, which is a boolean. When true , the returned string can include escape sequences to apply styling. The styling is present only if styling is turned on in GDB (see help set styling ). When false , which is the default if the styling argument is not given, no styling is applied to the returned string. The new read-only attribute gdb.InferiorThread.details is either a string containing additional target-specific thread-state information, or None if there is no such additional information. The new read-only attribute gdb.Type.is_scalar is True for scalar types, and False for all other types. The new read-only attribute gdb.Type.is_signed should only be read when Type.is_scalar is True , and will be True for signed types and False for all other types. Attempting to read this attribute for non-scalar types will raise a ValueError . You can now add GDB and MI commands implemented in Python. For more information see the upstream release notes: What has changed in GDB? Bugzilla:2172095 [1] GCC Toolset 13: bintuils rebased to version 2.40 GCC Toolset 13 provides the binutils package version 2.40. Notable enhancements include: Linkers: The new -w ( --no-warnings ) command-line option for the linker suppresses the generation of any warning or error messages. This is useful in case you need to create a known non-working binary. 
The ELF linker now generates a warning message if: The stack is made executable It creates a memory resident segment with all three of the Read , Write and eXecute permissions set It creates a thread local data segment with the eXecute permission set. You can disable these warnings by using the --no-warn-exec-stack or --no-warn-rwx-segments options. The linker can now insert arbitrary JSON-format metadata into binaries that it creates. Other tools: A new the objdump tool's --private option to display fields in the file header and section headers for Portable Executable (PE) format files. A new --strip-section-headers command-line option for the objcopy and strip utilities to remove the ELF section header from ELF files. A new --show-all-symbols command-line option for the objdump utility to display all symbols that match a given address when disassembling, as opposed to the default function of displaying only the first symbol that matches an address. A new -W ( --no-weak ) option to the nm utility to make it ignore weak symbols. The objdump utility now supports syntax highlighting of disassembler output for some architectures. Use the --disassembler-color= MODE command-line option, with MODE being one of the following: off color - This option is supported by all terminal emulators. extended-color - This option uses 8-bit colors not supported by all terminal emulators. Bugzilla:2171924 [1] GCC Toolset 13: annobin rebased to version 12.20 GCC Toolset 13 provides the annobin package version 12.20. Notable enhancements include: Added support for moving annobin notes into a separate debug info file. This results in reduced executable binary size. Added support for a new smaller note format, which reduces the size of the separate debuginfo files and the time taken to create these files. Bugzilla:2171921 [1] Valgrind rebased to version 3.21.0 Valgrind has been updated to version 3.21.0. Notable enhancements include: A new abexit value for the --vgdb-stop-at= event1 , event2 ,... option notifies the gdbserver utility when your program exits abnormally, such as with a non-zero exit code. A new --enable-debuginfod=[yes|no] option instructs Valgrind to use the debuginfod servers listed in the DEBUGINFOD_URLS environment variable to fetch any missing DWARF debuginfo information for the program running under Valgrind. The default value for this option is yes . Note The DEBUGINFOD_URLS environment variable is not set by default. The vgdb utility now supports the extended remote protocol when invoked with the --multi option. The GDB run command is supported in this mode and, as a result, you can run GDB and Valgrind from a single terminal. You can use the --realloc-zero-bytes-frees=[yes|no] option to change the behavior of the realloc() function with a size of zero for tools that intercept the malloc() call. The memcheck tool now performs checks for the use of the realloc() function with a size of zero. Use the new --show-realloc-size-zero=[yes|no] switch to disable this feature. You can use the new --history-backtrace-size= value option for the helgrind tool to configure the number of entries to record in the stack traces of earlier accesses. The --cache-sim=[yes|no] cachegrind option now defaults to no and, as a result, only instruction cache read events are gathered by default. The source code for the cg_annotate , cg_diff , and cg_merge cachegrind utilities has been rewritten and, as a result, the utilities have more flexible command line option handling. 
For example, they now support the --show-percs and --no-show-percs options as well as the existing --show-percs=yes and --show-percs=no options. The cg_annotate cachegrind utility now supports diffing (using the --diff , --mod-filename , and --mod-funcname options) and merging (by passing multiple data files). In addition, cg_annotate now provides more information at the file and function level. A new user-request for the DHAT tool allows you to override the 1024 byte limit on access count histograms for blocks of memory. The following new architecture-specific instruction sets are now supported: 64-bit ARM: v8.2 scalar and vector Floating-point Absolute Difference (FABD), Floating-point Absolute Compare Greater than or Equal (FACGE), Floating-point Absolute Compare Greater Than (FACGT), and Floating-point Add (FADD) instructions. v8.2 Floating-point (FP) compare and conditional compare instructions. Zero variants of v8.2 Floating-point (FP) compare instructions. 64-bit IBM Z: Support for the miscellaneous-instruction-extensions facility 3 and the vector-enhancements facility 2 . This enables programs compiled with the -march=arch13 or -march=z15 options to be executed under Valgrind. IBM Power: ISA 3.1 support is now complete. ISA 3.0 now supports the deliver a random number (darn) instruction. ISA 3.0 now supports the System Call Vectored (scv) instruction. ISA 3.0 now supports the copy, paste, and cpabort instructions. Bugzilla:2124345 systemtap rebased to version 4.9 The systemtap package has been upgraded to version 4.9. Notable changes include: A new Language-Server-Protocol (LSP) backend for easier interactive drafting of systemtap scripts on LSP-capable editors. Access to a Python/Jupyter interactive notebook frontend. Improved handling of DWARF 5 bitfields. Bugzilla:2186932 elfutils rebased to version 0.189 The elfutils package has been updated to version 0.189. Notable improvements and bug fixes include: libelf The elf_compress tool now supports the ELFCOMPRESS_ZSTD ELF compression type. libdwfl The dwfl_module_return_value_location function now returns 0 (no return type) for DWARF Information Entries (DIEs) that point to a DW_TAG_unspecified_type type tag. eu-elfcompress The -t and --type= options now support the Zstandard ( zstd ) compression format via the zstd argument. Bugzilla:2182060 libpfm rebased to version 4.13 The libpfm package has been updated to version 4.13. With this update, libpfm can now access performance monitoring hardware native events for the following processor microarchitectures: AMD Zen 4 ARM Neoverse N1 ARM Neoverse N2 ARM Neoverse V1 ARM Neoverse V2 4th Generation Intel(R) Xeon(R) Scalable Processors IBM z16 Bugzilla:2185653 , Bugzilla:2111987, Bugzilla:2111966, Bugzilla:2111973, Bugzilla:2109907, Bugzilla:2111981, Bugzilla:2047725 papi supports new processor microarchitectures With this enhancement, you can access performance monitoring hardware using papi events presets on the following processor microarchitectures: ARM Neoverse N1 ARM Neoverse N2 ARM Neoverse V1 ARM Neoverse V2 Bugzilla:2111982 [1] , Bugzilla:2111988 papi now supports fast performance event count read operations for 64-bit ARM Previously on 64-bit ARM processors, all performance event counter read operations required the use of a resource-intensive system call. papi has been updated for 64-bit ARM to let processes monitoring themselves with the performance counters use a faster user-space read of the performance event counters. 
Setting the /proc/sys/kernel/perf_user_access parameter to 1 reduces the average number of clock cycles for papi to read 2 counters from 724 cycles to 29 cycles. Bugzilla:2161146 [1] LLVM Toolset rebased to version 16.0.6 LLVM Toolset has been updated to version 16.0.6. Notable enhancements include: Improvements to optimization Support for new CPU extensions Improved support for new C++ versions. Notable backwards incompatible changes include: Clang's default C++ standard is now gnu++17 instead of gnu++14 . The -Wimplicit-function-declaration , -Wimplicit-int and -Wincompatible-function-pointer-types options now default to error for C code. This might affect the behavior of configure scripts. By default, Clang 16 uses the libstdc++ library version 13 and binutils 2.40 provided by GCC Toolset 13. For more information, see the LLVM release notes and Clang release notes . Bugzilla:2178806 Rust Toolset rebased to version 1.71.1 Rust Toolset has been updated to version 1.71.1. Notable changes include: A new implementation of multiple producer, single consumer (mpsc) channels to improve performance A new Cargo sparse index protocol for more efficient use of the crates.io registry New OnceCell and OnceLock types for one-time value initialization A new C-unwind ABI string to enable usage of forced unwinding across Foreign Function Interface (FFI) boundaries For more details, see the series of upstream release announcements: Announcing Rust 1.67.0 Announcing Rust 1.68.0 Announcing Rust 1.69.0 Announcing Rust 1.70.0 Announcing Rust 1.71.0 Bugzilla:2191740 The Rust profiler_builtins runtime component is now available With this enhancement, the Rust profile_builtins runtime component is now available. This runtime component enables the following compiler options: -C instrument-coverage Enables coverage profiling -C profile-generate Enables profile-guided optimization Bugzilla:2213875 [1] Go Toolset rebased to version 1.20.10 Go Toolset has been updated to version 1.20.10. Notable enhancements include: New functions added in the unsafe package to handle slices and strings without depending on the internal representation. Comparable types can now satisfy comparable constraints. A new crypto/ecdh package. The go build and go test commands no longer accept the -i flag. The go generate and go test commands now accept the -skip pattern option. The go build , go install , and other build-related commands now support the -pgo and -cover flags. The go command now disables cgo by default on systems without a C toolchain. The go version -m command now supports reading more Go binaries types. The go command now disables cgo by default on systems without a C toolchain. Added support for collecting code coverage profiles from applications and integration tests instead of collecting them only from unit tests. Bugzilla:2185260 [1] grafana rebased to version 9.2.10 The grafana package has been updated to version 9.2.10. Notable changes include: The time series panel is now the default visualization option, replacing the graph panel. Grafana provides a new Prometheus and Loki query builder. Grafana now includes multiple UI/UX and performance improvements. The license has changed from Apache 2.0 to GNU Affero General Public License (AGPL). The heatmap panel is now used throughout Grafana. Geomaps can now measure both distance and area. The Alertmanager is now based on Prometheus Alertmanager version 0.24. Grafana Alerting rules now return an Error state by default on execution error or timeout. 
Expressions can now be used on public dashboards. The join transformation now supports inner joins. Public dashboards now allow sharing Grafana dashboards. A new Prometheus streaming parser is now available as an opt-in feature. For more information, see the upstream release notes: What's new in Grafana v8.0 What's new in Grafana v9.0 What's new in Grafana v9.1 What's new in Grafana v9.2 Bugzilla:2193250 grafana-pcp rebased to version 5.1.1 The grafana-pcp package, which provides the Performance Co-Pilot Grafana Plugin, has been updated to version 5.1.1. Notable changes include: Query editor: Added buttons to disable rate conversation and time utilization conversation Redis datasource: Removed the deprecated label_values(metric, label) function Fixed the network error for metrics with many series (requires Performance Co-Pilot version 6 and later) Set the pmproxy API timeout to 1 minute Bugzilla:2193270 .NET 8.0 is available Red Hat Enterprise Linux 8.9 is distributed with .NET version 8.0. Notable improvements include: Added support for the C#12 and F#8 language versions. Added support for building container images using the .NET Software Development Kit directly. Many performance improvements to the garbage collector (GC), Just-In-Time (JIT) compiler, and the base libraries. Jira:RHELPLAN-164398 [1] 4.10. Identity Management samba rebased to version 4.18.4 The samba packages have been upgraded to upstream version 4.18.4, which provides bug fixes and enhancements over the version. The most notable changes: Security improvements in releases impacted the performance of the Server Message Block (SMB) server for high metadata workloads. This update improves the performance in this scenario. The new wbinfo --change-secret-at=<domain_controller> command enforces the change of the trust account password on the specified domain controller. By default, Samba stores access control lists (ACLs) in the security.NTACL extended attribute of files. You can now customize the attribute name with the acl_xattr:<security_acl_name> setting in the /etc/samba/smb.conf file. Note that a custom extended attribute name is not a protected location as security.NTACL . Consequently, users with local access to the server can be able to modify the custom attribute's content and compromise the ACL. Note that the server message block version 1 (SMB1) protocol has been deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Red Hat does not support downgrading tdb database files. After updating Samba, use the testparm utility to verify the /etc/samba/smb.conf file. Bugzilla:2190417 ipa rebased to version 4.9.12 The ipa package has been upgraded to version 4.9.12. For more information, see the upstream FreeIPA release notes . Bugzilla:2196425 Multiple IdM groups and services can now be managed in a single Ansible task With this enhancement in ansible-freeipa , you can add, modify, and delete multiple Identity Management (IdM) user groups and services by using a single Ansible task. For that, use the groups and services options of the ipagroup and ipaservice modules. Using the groups option available in ipagroup , you can specify multiple group variables that only apply to a particular group. This group is defined by the name variable, which is the only mandatory variable for the groups option. 
Similarly, using the services option available in ipaservice , you can specify multiple service variables that only apply to a particular service. This service is defined by the name variable, which is the only mandatory variable for the services option. Jira:RHELDOCS-16474 [1] ansible-freeipa ipaserver role now supports Random Serial Numbers With this update, you can use the ipaserver_random_serial_numbers=true option with the ansible-freeipa ipaserver role. This way, you can generate fully random serial numbers for certificates and requests in PKI when installing an Identity Management (IdM) server using Ansible. With RSNv3, you can avoid range management in large IdM installations and prevent common collisions when reinstalling IdM. Important RSNv3 is supported only for new IdM installations. If enabled, it is required to use RSNv3 on all PKI services. Jira:RHELDOCS-16462 [1] The ipaserver_remove_on_server and ipaserver_ignore_topology_disconnect options are now available in the ipaserver role If removing a replica from an Identity Management (IdM) topology by using the remove_server_from_domain option of the ipaserver ansible-freeipa role leads to a disconnected topology, you must now specify which part of the domain you want to preserve. Specifically, you must do the following: Specify the ipaserver_remove_on_server value to identify which part of the topology you want to preserve. Set ipaserver_ignore_topology_disconnect to True. Note that if removing a replica from IdM by using the remove_server_from_domain option preserves a connected topology, neither of these options is required. Bugzilla:2127901 The ipaclient role now allows configuring user subID ranges on the IdM level With this update, the ipaclient role provides the ipaclient_subid option, using which you can configure subID ranges on the Identity Management (IdM) level. Without the new option set explicitly to true , the ipaclient role keeps the default behavior and installs the client without subID ranges configured for IdM users. Previously, the role configured the sssd authselect profile that in turn customized the /etc/nsswitch.conf file. The subID database did not use IdM and relied only on the local files of /etc/subuid and /etc/subgid . Bugzilla:2175766 You can now manage IdM certificates using the ipacert Ansible module You can now use the ansible-freeipa ipacert module to request or retrieve SSL certificates for Identity Management (IdM) users, hosts and services. The users, hosts and services can then use these certificates to authenticate to IdM. You can also revoke the certificates, as well as restore certificates that have been put on hold. Bugzilla:2127906 MIT Kerberos now supports the Extended KDC MS-PAC signature With this update, MIT Kerberos, which is used by Red Hat, implements support for one of the two types of the Privilege Attribute Certificate (PAC) signatures introduced by Microsoft in response to recent CVEs. Specifically, MIT Kerberos in RHEL 8 supports the Extended KDC signature that was released in KB5020805 and that addresses CVE-2022-37967 . Note that because of ABI stability constraints , MIT Kerberos on RHEL8 cannot support the other PAC signature type, that is Ticket signature as defined in KB4598347 . 
To troubleshoot problems related to this enhancement, see the following Knowledgebase resources: RHEL-8.9 IdM update, web UI and CLI 401 Unauthorized with KDC S4U2PROXY_EVIDENCE_TKT_WITHOUT_PAC - user and group objects need SIDs find_sid_for_ldap_entry - [file ipa_sidgen_cofind_sid_for_ldap_entry - [file ipa_sidgen_common.c, line 521]: Cannot convert Posix ID [120000023l] into an unused SID] When upgrading to RHEL9, IDM users are not able to login anymore POSIX IDs, SIDs and IDRanges in IPA See also BZ#2211387 and BZ#2176406 . Bugzilla:2211390 RHEL 8.9 provides 389-ds-base 1.4.3.37 RHEL 8.9 is distributed with the 389-ds-base package version 1.4.3.37. Bugzilla:2188628 New passwordAdminSkipInfoUpdate: on/off configuration option is now available You can add a new passwordAdminSkipInfoUpdate: on/off setting under the cn=config entry to provide a fine grained control over password updates performed by password administrators. When you enable this setting, password updates do not update certain attributes, for example, passwordHistory , passwordExpirationTime , passwordRetryCount , pwdReset , and passwordExpWarned . Bugzilla:2166332 4.11. Graphics infrastructures Intel Arc A-Series graphics is now fully supported The Intel Arc A-Series graphics (Alchemist or DG2) feature, previously available as a Technology Preview, is now fully supported. Intel Arc A-Series graphics is a GPU that enables hardware acceleration, mostly used in PC gaming. With this release, you no longer have to set the i915.force_probe kernel option, and full support for these GPUs is enabled by default. Bugzilla:2041686 [1] 4.12. The web console Podman health check action is now available You can select one of the following Podman health check actions when creating a new container: No action (default): Take no action. Restart: Restart the container. Stop: Stop the container. Force stop: Force stops the container, it does not wait for the container to exit. Jira:RHELDOCS-16247 [1] Accounts page updates for the web console This update introduces the following updates to the Accounts page: It is now possible to add custom user ID and define home directory and shell during the account creation process. When creating an account, password validation actively performs a check on every keystroke. Additionally, weak passwords are now shown with a warning. Account detail pages now show the home directory and shell for an account. It is possible to change shell from the account details page. Jira:RHELDOCS-16367 [1] 4.13. Red Hat Enterprise Linux system roles The postgresql RHEL system role is now available The new postgresql RHEL system role installs, configures, manages, and starts the PostgreSQL server. The role also optimizes the database server settings to improve performance. The role supports the currently released and supported versions of PostgreSQL on RHEL 8 and RHEL 9 managed nodes. For more information, see Installing and configuring PostgreSQL by using the postgresql RHEL system role . Bugzilla:2151371 keylime_server RHEL system role With the new keylime_server RHEL system role, you can use Ansible playbooks to configure the verifier and registrar Keylime components on RHEL 9 systems. Keylime is a remote machine attestation tool that uses the trusted platform module (TPM) technology. Bugzilla:2224387 Support for new ha_cluster system role features The ha_cluster system role now supports the following features: Configuration of resource and resource operation defaults, including multiple sets of defaults with rules. 
Loading and blocking of SBD watchdog kernel modules. This makes installed hardware watchdogs available to the cluster. Assignment of distinct passwords to the cluster hosts and the quorum device. With that, you can configure a deployment where the same quorum hosts are joined to multiple, separate clusters, and the passwords of the hacluster user on these clusters are different. For information about the parameters you configure to implement these features, see Configuring a high-availability cluster by using the ha_cluster RHEL system role . Bugzilla:2190483 , Bugzilla:2190478 , Bugzilla:2216485 storage system role supports configuring the stripe size for RAID LVM volumes With this update, you can now specify a custom stripe size when creating RAID LVM devices. For better performance, use the custom stripe size for SAP HANA. The recommended stripe size for RAID LVM volumes is 64 KB. Bugzilla:2141961 podman RHEL system role now supports Quadlets, healthchecks, and secrets Starting with Podman 4.6, you can use the podman_quadlet_specs variable in the podman RHEL system role. You can define a Quadlet by specifying a unit file, or in the inventory by a name, a type of unit, and a specification. Types of a unit can be the following: container , kube , network , and volume . Note that Quadlets work only with root containers on RHEL 8. Quadlets work with rootless containers on RHEL 9. The healthchecks are supported only for Quadlet Container types. In the [Container] section, specify the HealthCmd field to define the healthcheck command and HealthOnFailure field to define the action when a container is unhealthy. Possible options are none , kill , restart , and stop . You can use the podman_secrets variable to manage secrets. For details, see upstream documentation . Jira:RHELPLAN-154440 [1] RHEL system roles now have new volume options for mount point customization With this update, you can now specify mount_user , mount_group , and mount_permissions parameters for your mount directory. Bugzilla:2181661 kdump RHEL system role updates The kdump RHEL system role has been updated to a newer version, which brings the following notable enhancements: After installing kexec-tools , the utility suite no longer generates the /etc/sysconfig/kdump file because you do not need to manage this file anymore. The role supports the auto_reset_crashkernel and dracut_args variables. For more details, see resources in the /usr/share/doc/rhel-system-roles/kdump/ directory. Bugzilla:2211272 The ad_integration RHEL system role can now rejoin an AD domain With this update, you can now use the ad_integration RHEL system role to rejoin an Active Directory (AD) domain. To do this, set the ad_integration_force_rejoin variable to true . If the realm_list output shows that host is already in an AD domain, it will leave the existing domain before rejoining it. Bugzilla:2211723 The rhc system role now supports setting a proxy server type The newly introduced attribute scheme under the rhc_proxy parameter enables you to configure the proxy server type by using the rhc system role. You can set two values: http , the default and https . Bugzilla:2211778 New option in the ssh role to disable configuration backups You can now prevent old configuration files from being backed up before they are overwritten by setting the new ssh_backup option to false . Previously, backup configuration files were created automatically, which might be unnecessary. 
The default value of the ssh_backup option is true , which preserves the original behavior. Bugzilla:2216759 The certificate RHEL system role now allows changing certificate file mode when using certmonger Previously, certificates created by the certificate RHEL system role with the certmonger provider used a default file mode. However, in some use-cases you might require a more restrictive mode. With this update, you can now set a different certificate and a key file mode using the mode parameter. Bugzilla:2218204 New RHEL system role for managing systemd units The rhel-system-role package now contains the systemd RHEL system role. You can use this role to deploy unit files and manage systemd units on multiple systems. You can automate systemd functionality by providing systemd unit files and templates, and by specifying the state of those units, such as started, stopped, masked and other. Bugzilla:2224388 The network RHEL system role supports the no-aaaa DNS option You can now use the no-aaaa option to configure DNS settings on managed nodes. Previously, there was no option to suppress AAAA queries generated by the stub resolver, including AAAA lookups triggered by NSS-based interfaces such as getaddrinfo ; only DNS lookups were affected. With this enhancement, you can now suppress AAAA queries generated by the stub resolver. Bugzilla:2218595 The network RHEL system role supports the auto-dns option to control automatic DNS record updates This enhancement provides support for defined name servers and search domains. You can now use only the name servers and search domains specified in dns and dns_search properties while disabling automatically configured name servers and search domains such as dns record from DHCP. With this enhancement, you can disable automatically auto dns record by changing the auto-dns settings. Bugzilla:2211273 firewall RHEL system role supports variables related to ipsets With this update of the firewall RHEL system role, you can define, modify, and delete ipsets . Also, you can add and remove those ipsets from firewall zones. Alternatively, you can use those ipsets when defining firewall rich rules. You can manage ipsets with the firewall RHEL system role using the following variables: ipset ipset_type ipset_entries short description state: present or state: absent permanent: true The following are some notable benefits of this enhancement: You can reduce the complexity of the rich rules that define rules for many IP addresses. You can add or remove IP addresses from sets as needed without modifying multiple rules. For more details, see resources in the /usr/share/doc/rhel-system-roles/firewall/ directory. Bugzilla:2140880 Improved performance of the selinux system role with restorecon -T 0 The selinux system role now uses the -T 0 option with the restorecon command in all applicable cases. This improves the performance of tasks that restore default SELinux security contexts on files. Bugzilla:2192343 The firewall RHEL system role has an option to disable conflicting services, and it no longer fails if firewalld is masked Previously, the firewall system role failed when the firewalld service was masked on the role run or in the presence of conflicting services. This update brings two notable enhancements: The linux-system-roles.firewall role always attempts to install, unmask, and enable the firewalld service on role run. 
You can now add a new variable firewall_disable_conflicting_services to your playbook to disable known conflicting services, for example, iptables.service , nftables.service , and ufw.service . The firewall_disable_conflicting_services variable is set to false by default. To disable conflicting services, set the variable to true . Bugzilla:2222809 The podman RHEL system role now uses getsubids to get subuids and subgids The podman RHEL system role now uses the getsubids command to get the subuid and subgid ranges for a user and group, respectively. The podman RHEL system role also uses this command to verify users and groups to work with identity management. Jira:RHEL-866 [1] The podman_kube_specs variable now supports pull_image and continue_if_pull_fails fields The podman_kube_specs variable now supports new fields: pull_image : ensures the image is pulled before use. The default value is true . Use false if you have some other mechanism to ensure the images are present on the system and you do not want to pull the images. continue_if_pull_fails : If pulling image fails, it is not treated as a fatal error, and continues with the role. The default is false . Use true if you have some other mechanism to ensure the correct images are present on the system. Jira:RHEL-858 [1] Resetting the firewall RHEL system role configuration now requires minimal downtime Previously, when you reset the firewall role configuration by using the : replaced variable, the firewalld service restarted. Restarting adds downtime and prolongs the period of an open connection in which firewalld does not block traffic from active connections. With this enhancement, the firewalld service completes the configuration reset by reloading instead of restarting. Reloading minimizes the downtime and reduces the opportunity to bypass firewall rules. As a result, using the : replaced variable to reset the firewall role configuration now requires minimal downtime. Bugzilla:2224648 4.14. RHEL in cloud environments cloud-init supports NetworkManager keyfiles With this update, the cloud-init utility can use a NetworkManager (NM) keyfile to configure the network of the created cloud instance. Note that by default, cloud-init still uses the sysconfig method for network setup. To configure cloud-init to use a NM keyfile instead, edit the /etc/cloud/cloud.cfg and set network-manager as the primary network renderer: Bugzilla:2219528 [1] cloud-init now uses VMware datasources by default on ESXi When creating RHEL virtual machines (VMs) on a host that uses the VMware ESXi hypervisor, such as the VMware vSphere cloud platform. This improves the performance and stability of creating an ESXi instance of RHEL by using cloud-init . Note, however, that ESXi is still compatible with Open Virtualization Format (OVF) datasources, and you can use an OVF datasource if a VMware one is not available. Bugzilla:2230777 [1] 4.15. Supportability sos rebased to version 4.6 The sos utility, for collecting configuration, diagnostic, and troubleshooting data, has been rebased to version 4.6. This update provides the following enhancements: sos reports now include the contents of both /boot/grub2/custom.cfg and /boot/grub2/user.cfg files that might contain critical information for troubleshooting boot issues. (BZ#2213951) The sos plugin for OVN-Kubernetes collects additional logs for the interconnect environment. 
With this update, sos also collects logs from the ovnkube-controller container when both ovnkube-node and ovnkube-controller containers are merged into one. In addition, notable bug fixes include: sos now correctly gathers cgroup data in the OpenShift Container Platform 4 environment (BZ#2186361). While collecting sos reports with the sudo plugin enabled, sos now removes the bindpw option properly. (BZ#2143272) The subscription_manager plugin no longer collects proxy usernames and passwords from the /var/lib/rhsm/ path. (BZ#2177282) The virsh plugin no longer collects the SPICE remote-display passwords in virt-manager logs, which prevents sos from disclosing passwords in its reports. (BZ#2184062) sos now masks usernames and passwords previously displayed in the /var/lib/iscsi/nodes/<IQN>/<PortalIP>/default file. Important The generated archive might contain data considered sensitive. Thus, you should always review the content before passing it to any third party. (BZ#2187859) sos completes the tailed log collection even when the size of the log file is exceeded and when a plugin times out. (BZ#2203141) When entering the sos collect command on a Pacemaker cluster node, sos collects an sos report from the same cluster node. (BZ#2186460) When collecting data from a host in the OpenShift Container Platform 4 environment, sos now uses the sysroot path, which ensures that only the correct data are assembled. (BZ#2075720) The sos report --clean command obfuscates all MAC addresses as intended. (BZ#2207562) Disabling the hpssm plugin no longer raises exceptions. (BZ#2216608) The sos clean command follows permissions of sanitized files. (BZ#2218279) For details on each release of sos , see upstream release notes . Jira:RHELPLAN-156196 [1] 4.16. Containers Podman supports pulling and pushing images compressed with zstd You can pull and push images compressed with the zstd format. The zstd compression is more efficient and faster than gzip. It can reduce the amount of network traffic and storage involved in pulling and pushing the image. Jira:RHELPLAN-154313 [1] Quadlet in Podman is now available Beginning with Podman v4.6, you can use Quadlet to automatically generate a systemd service file from a container description. The Quadlets might be easier to use than the podman generate systemd command because the description focuses on the relevant container details and without the technical complexity of running containers under systemd . Note that Quadlets work only with rootful containers. For more details, see the Quadlet upstream documentation and the Make systemd better for Podman with Quadlet article. Jira:RHELPLAN-154431 [1] The Container Tools packages have been updated The updated Container Tools packages, which contain the Podman, Buildah, Skopeo, crun, and runc tools, are now available. This update applies a series of bug fixes and enhancements over the version. Notable changes in Podman v4.6 include: The podman kube play command now supports the --configmap= <path> option to provide Kubernetes YAML file with environment variables used within the containers of the pod. The podman kube play command now supports multiple Kubernetes YAML files for the --configmap option. The podman kube play command now supports containerPort names and port numbers within liveness probes. The podman kube play command now adds the ctrName as an alias to the pod network. The podman kube play and podman kube generate commands now support SELinux filetype labels and ulimit annotations. 
A new command, podman secret exists , has been added, which verifies if a secret with the given name exists. The podman create , podman run , podman pod create , and podman pod clone commands now support a new option, --shm-size-systemd , which allows limiting tmpfs sizes for systemd-specific mounts. The podman create and podman run commands now support a new option, --security-opt label=nested , which allows SELinux labeling within a confined container. Podman now supports auto updates for containers running inside a pod. Podman can now use an SQLite database as a backend for increased stability. The default remains the BoltDB database. You can select the database by setting the database_backend field in the containers.conf file. Podman now supports Quadlets to automatically generate a systemd service file from the container description. The description focuses on the relevant container details and hides the technical complexity of running containers under systemd . For further information about notable changes, see upstream release notes . Jira:RHELPLAN-154443 [1] Podman now supports a Podmansh login shell Beginning with Podman v4.6, you can use the Podmansh login shell to manage user access and control. To switch to CGroups v2, add systemd.unified_cgroup_hierarchy=1 to the kernel command line. Configure the settings for a user to use the /usr/bin/podmansh command as a login shell instead of a standard shell command, for example, /usr/bin/bash . When a user logs into a system setup, the podmansh command runs the user's session in a Podman container named podmansh . Containers into which users log in are defined using the Quadlet files, which are created in the /etc/containers/systemd/users/ directory. In these files, set the ContainerName field in the [Container] section to podmansh . Systemd automatically starts podmansh when the user session starts and continues running until all user sessions exit. For more information, see Podman v4.6.0 Introduces Podmansh: A Revolutionary Login Shell . Jira:RHELPLAN-163002 [1] Clients for sigstore signatures with Fulcio and Rekor are now available With Fulcio and Rekor servers, you can now create signatures by using short-term certificates based on an OpenID Connect (OIDC) server authentication, instead of manually managing a private key. Clients for sigstore signatures with Fulcio and Rekor, previously available as a Technology Preview, are now fully supported. This added functionality is the client side support only, and does not include either the Fulcio or Rekor servers. Add the fulcio section in the policy.json file. To sign container images, use the podman push --sign-by-sigstore=file.yml or skopeo copy --sign-by-sigstore= file.yml commands, where file.yml is the sigstore signing parameter file. To verify signatures, add the fulcio section and the rekorPublicKeyPath or rekorPublicKeyData fields in the policy.json file. For more information, see containers-policy.json man page. Jira:RHELPLAN-160659 [1] | [
"network --device ens3 --ipv4-dns-search domain1.example.com,domain2.example.com",
"yum module install nodejs:20",
"export PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING=true",
"[email_addr_parsing] PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING = true",
"yum install gcc-toolset-13",
"scl enable gcc-toolset-13 tool",
"scl enable gcc-toolset-13 bash",
"cat /etc/cloud/cloud.cfg network: renderers: ['network-manager', 'eni', 'netplan', 'sysconfig', 'networkd']"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/new-features |
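To make the Quadlet notes in the container tools and podman RHEL system role sections above more concrete, the following is a minimal sketch of a Quadlet unit file. The file name, image, and published port are hypothetical placeholders rather than values from the release notes; only the /etc/containers/systemd/ location and the [Container] keys mentioned above (including HealthCmd and HealthOnFailure) are assumed to behave as described.

# /etc/containers/systemd/myapp.container -- hypothetical example
[Unit]
Description=Example web container generated into a systemd service by Quadlet

[Container]
# Image and published port are placeholders, not values from the release notes
Image=registry.example.com/myapp:latest
PublishPort=8080:8080
# Optional healthcheck fields described in the podman system role notes above
HealthCmd=curl -fsS http://localhost:8080/healthz || exit 1
HealthOnFailure=restart

[Install]
WantedBy=multi-user.target default.target

After placing such a file, running systemctl daemon-reload followed by systemctl start myapp.service would start the container; per the note above, on RHEL 8 this applies to root containers only.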
Chapter 19. Project [config.openshift.io/v1] | Chapter 19. Project [config.openshift.io/v1] Description Project holds cluster-wide information about Project. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 19.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description projectRequestMessage string projectRequestMessage is the string presented to a user if they are unable to request a project via the projectrequest api endpoint projectRequestTemplate object projectRequestTemplate is the template to use for creating projects in response to projectrequest. This must point to a template in 'openshift-config' namespace. It is optional. If it is not specified, a default template is used. 19.1.2. .spec.projectRequestTemplate Description projectRequestTemplate is the template to use for creating projects in response to projectrequest. This must point to a template in 'openshift-config' namespace. It is optional. If it is not specified, a default template is used. Type object Property Type Description name string name is the metadata.name of the referenced project request template 19.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 19.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/projects DELETE : delete collection of Project GET : list objects of kind Project POST : create a Project /apis/config.openshift.io/v1/projects/{name} DELETE : delete a Project GET : read the specified Project PATCH : partially update the specified Project PUT : replace the specified Project /apis/config.openshift.io/v1/projects/{name}/status GET : read status of the specified Project PATCH : partially update status of the specified Project PUT : replace status of the specified Project 19.2.1. /apis/config.openshift.io/v1/projects Table 19.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Project Table 19.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Project Table 19.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.5. HTTP responses HTTP code Reponse body 200 - OK ProjectList schema 401 - Unauthorized Empty HTTP method POST Description create a Project Table 19.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.7. Body parameters Parameter Type Description body Project schema Table 19.8. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 202 - Accepted Project schema 401 - Unauthorized Empty 19.2.2. /apis/config.openshift.io/v1/projects/{name} Table 19.9. Global path parameters Parameter Type Description name string name of the Project Table 19.10. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Project Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 19.12. Body parameters Parameter Type Description body DeleteOptions schema Table 19.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Project Table 19.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.15. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Project Table 19.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.17. Body parameters Parameter Type Description body Patch schema Table 19.18. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Project Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body Project schema Table 19.21. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty 19.2.3. /apis/config.openshift.io/v1/projects/{name}/status Table 19.22. Global path parameters Parameter Type Description name string name of the Project Table 19.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Project Table 19.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.25. 
HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Project Table 19.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.27. Body parameters Parameter Type Description body Patch schema Table 19.28. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Project Table 19.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.30. Body parameters Parameter Type Description body Project schema Table 19.31. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/project-config-openshift-io-v1 |
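As a hedged illustration of the spec fields documented in this chapter, a cluster administrator could apply a manifest like the following. The message text and template name are placeholder values; only the apiVersion, kind, the canonical resource name cluster, and the spec.projectRequestMessage and spec.projectRequestTemplate.name fields come from the schema described above, and the template is assumed to exist in the openshift-config namespace.

apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestMessage: "To request a new project, contact the platform team."  # placeholder text
  projectRequestTemplate:
    name: project-request  # assumes a template with this name exists in openshift-config

Applying this manifest, for example with oc apply -f project.yaml, would go through the update endpoints listed in section 19.2.2.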
probe::sunrpc.sched.delay | probe::sunrpc.sched.delay Name probe::sunrpc.sched.delay - Delay an RPC task Synopsis sunrpc.sched.delay Values prog the program number in the RPC call xid the transmission id in the RPC call delay the time delayed vers the program version in the RPC call tk_flags the flags of the task tk_pid the debugging id of the task prot the IP protocol in the RPC call | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-sched-delay |
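For reference, a minimal SystemTap sketch that attaches to this probe and prints the documented values might look like the following; the output format is illustrative only.

stap -e 'probe sunrpc.sched.delay {
  printf("task %d: prog=%d vers=%d xid=%d prot=%d flags=%d delayed=%d\n",
         tk_pid, prog, vers, xid, prot, tk_flags, delay)
}'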
Chapter 13. Updating hardware on nodes running on vSphere | Chapter 13. Updating hardware on nodes running on vSphere You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 13 or later is supported for vSphere virtual machines in a cluster. You can update your virtual hardware immediately or schedule an update in vCenter. Important Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. This version is still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. 13.1. Updating virtual hardware on vSphere To update the hardware of your virtual machines (VMs) on VMware vSphere, update your virtual machines separately to reduce the risk of downtime for your cluster. 13.1.1. Updating the virtual hardware for control plane nodes on vSphere To reduce the risk of downtime, it is recommended that control plane nodes be upgraded serially. This ensures that the Kubernetes API remains available and etcd retains quorum. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 6.7U3 or later. Procedure List the control plane nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.23.0 control-plane-node-1 Ready master 75m v1.23.0 control-plane-node-2 Ready master 75m v1.23.0 Note the names of your control plane nodes. Mark the control plane node as unschedulable. USD oc adm cordon <control_plane_node> Shut down the virtual machine (VM) associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Upgrade the VM in the vSphere client. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Power on the VM associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<control_plane_node> Mark the control plane node as schedulable again: USD oc adm uncordon <control_plane_node> Repeat this procedure for each control plane node in your cluster. 13.1.2. Updating the virtual hardware for compute nodes on vSphere To reduce the risk of downtime, it is recommended that compute nodes be upgraded serially. Note Multiple compute nodes can be upgraded in parallel given workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 6.7U3 or later. Procedure List the compute nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/worker Example output NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.23.0 compute-node-1 Ready worker 30m v1.23.0 compute-node-2 Ready worker 30m v1.23.0 Note the names of your compute nodes. 
Mark the compute node as unschedulable: USD oc adm cordon <compute_node> Evacuate the pods from the compute node. There are several ways to do this. For example, you can evacuate all or selected pods on a node: USD oc adm drain <compute_node> [--pod-selector=<pod_selector>] See the "Understanding how to evacuate pods on nodes" section for other options to evacuate pods from a node. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Upgrade the VM in the vSphere client. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Power on the VM associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<compute_node> Mark the compute node as schedulable again: USD oc adm uncordon <compute_node> Repeat this procedure for each compute node in your cluster. 13.1.3. Updating the virtual hardware for template on vSphere Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 6.7U3 or later. Procedure If the RHCOS template is configured as a vSphere template follow Convert a Template to a Virtual Machine in the VMware documentation prior to the step. Note Once converted from a template, do not power on the virtual machine. Update the VM in the vSphere client. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Convert the VM in the vSphere client from a VM to template. Follow Convert a Virtual Machine to a Template in the vSphere Client in the VMware documentation for more information. Additional resources Understanding how to evacuate pods on nodes 13.2. Scheduling an update for virtual hardware on vSphere Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following Schedule a Compatibility Upgrade for a Virtual Machine in the VMware documentation. When scheduling an upgrade prior to performing an upgrade of OpenShift Container Platform, the virtual hardware update occurs when the nodes are rebooted during the course of the OpenShift Container Platform upgrade. | [
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.23.0 control-plane-node-1 Ready master 75m v1.23.0 control-plane-node-2 Ready master 75m v1.23.0",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.23.0 compute-node-1 Ready worker 30m v1.23.0 compute-node-2 Ready worker 30m v1.23.0",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/updating-hardware-on-nodes-running-on-vsphere |
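For clusters with many compute nodes, the per-node steps above can be wrapped in a small loop. The sketch below is not part of the documented procedure: the drain flags, the 15-minute timeout, and the interactive pause are assumptions of the example, and the VM shutdown, hardware upgrade, and power-on still have to be performed in the vSphere client (or with separate tooling) while the script waits.

```bash
#!/bin/bash
# Sketch: serially cordon, drain, and uncordon each compute node around a
# manual virtual hardware upgrade performed in the vSphere client.
set -euo pipefail

for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
    node=${node#node/}   # "-o name" prints node/<name>; keep only <name>
    echo "### Preparing ${node} for its virtual hardware upgrade"
    oc adm cordon "${node}"
    oc adm drain "${node}" --ignore-daemonsets --delete-emptydir-data

    # Shut down the VM, upgrade its hardware version, and power it back on
    # in the vSphere client before continuing (see the procedure above).
    read -r -p "Press Enter once the VM for ${node} is powered on again... "

    oc wait --for=condition=Ready "node/${node}" --timeout=15m
    oc adm uncordon "${node}"
done
```

Control plane nodes should still be handled strictly one at a time as described in the procedure, so that the Kubernetes API stays available and etcd retains quorum.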
Chapter 27. Hoist Field Action | Chapter 27. Hoist Field Action Wrap data in a single field 27.1. Configuration Options The following table summarizes the configuration options available for the hoist-field-action Kamelet: Property Name Description Type Default Example field * Field The name of the field that will contain the event string Note Fields marked with an asterisk (*) are mandatory. 27.2. Dependencies At runtime, the hoist-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 27.3. Usage This section describes how you can use the hoist-field-action . 27.3.1. Knative Action You can use the hoist-field-action Kamelet as an intermediate step in a Knative binding. hoist-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: "The Field" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 27.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 27.3.1.2. Procedure for using the cluster CLI Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f hoist-field-action-binding.yaml 27.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 27.3.2. Kafka Action You can use the hoist-field-action Kamelet as an intermediate step in a Kafka binding. hoist-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: "The Field" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 27.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 27.3.2.2. Procedure for using the cluster CLI Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f hoist-field-action-binding.yaml 27.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 27.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/hoist-field-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: \"The Field\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f hoist-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step hoist-field-action -p \"step-0.field=The Field\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: \"The Field\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f hoist-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step hoist-field-action -p \"step-0.field=The Field\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/hoist-field-action |
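The cluster CLI procedure above can also be scripted end to end. The following sketch uses an example namespace and assumes that Camel K reports a Ready condition on the KameletBinding and labels integration pods with camel.apache.org/integration; adjust these assumptions if your installation differs.

```bash
#!/bin/bash
# Sketch: create the Kafka binding from this chapter and wait for it to run.
set -euo pipefail
NAMESPACE=my-camel-k-project   # example namespace

cat <<'EOF' | oc apply -n "${NAMESPACE}" -f -
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: hoist-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: hoist-field-action
      properties:
        field: "The Field"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
EOF

# The Ready condition and pod label below are assumptions about Camel K status reporting.
oc wait -n "${NAMESPACE}" kameletbinding/hoist-field-action-binding \
    --for=condition=Ready --timeout=5m
oc logs -n "${NAMESPACE}" -l camel.apache.org/integration=hoist-field-action-binding --tail=20
```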
Chapter 3. Usage | Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.0, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl516 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.0 Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql95 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.0 Components" . 3.1.3. Running a System Service from a Software Collection Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql95 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.0 Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. 
For example, to display the manual page for rh-mariadb101 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . The following container images are available with Red Hat Software Collections 3.0: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/httpd-24-rhel7 rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 rhscl/devtoolset-6-perftools-rhel7 rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 rhscl/perl-524-rhel7 rhscl/php-70-rhel7 rhscl/redis-32-rhel7 rhscl/mongodb-32-rhel7 rhscl/php-56-rhel7 rhscl/python-35-rhel7 rhscl/ruby-23-rhel7 The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 rhscl/devtoolset-4-perftools-rhel7 rhscl/mariadb-101-rhel7 rhscl/nginx-18-rhel7 rhscl/nodejs-4-rhel7 rhscl/postgresql-95-rhel7 rhscl/ror-42-rhel7 rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 rhscl/mongodb-26-rhel7 rhscl/mysql-56-rhel7 rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 rhscl/perl-520-rhel7 rhscl/postgresql-94-rhel7 rhscl/python-34-rhel7 rhscl/ror-41-rhel7 rhscl/ruby-22-rhel7 rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported. | [
"~]USD scl enable rh-perl524 'perl hello.pl' Hello, World!",
"~]USD scl enable python27 rh-postgresql95 bash",
"~]USD echo USDX_SCLS python27 rh-postgresql95",
"~]# service rh-postgresql95-postgresql start Starting rh-postgresql95-postgresql service: [ OK ] ~]# chkconfig rh-postgresql95-postgresql on",
"~]USD scl enable rh-mariadb101 \"man rh-mariadb101\""
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.0_release_notes/chap-usage |
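The patterns from this chapter can be combined into one short session. The collection names below are simply the ones used in the examples above; substitute whichever collections you actually have installed, and run the service commands as root.

```bash
#!/bin/bash
# Examples of the scl usage patterns described in this chapter.

# Run a single command with a Software Collection enabled.
scl enable rh-perl524 'perl -e "print qq(Hello, World!\n)"'

# Alternative form: everything after -- is the command to run.
scl enable python27 rh-postgresql95 -- python --version

# Inside an scl-enabled shell, $X_SCLS lists the active collections.
scl enable python27 rh-postgresql95 'echo "Active collections: $X_SCLS"'

# Start a collection's service and enable it at boot (run as root).
service rh-postgresql95-postgresql start
chkconfig rh-postgresql95-postgresql on

# Read a collection's general manual page.
scl enable rh-mariadb101 'man rh-mariadb101'
```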
7.183. piranha | 7.183. piranha 7.183.1. RHBA-2013:0351 - piranha bug fix and enhancement update Updated piranha packages that fix two bugs are now available for Red Hat Enterprise Linux 6. Piranha provides high-availability and load-balancing services for Red Hat Enterprise Linux. The piranha packages contain various tools to administer and configure the Linux Virtual Server (LVS), as well as the heartbeat and failover components. LVS is a dynamically-adjusted kernel routing mechanism that provides load balancing, primarily for Web and FTP servers. Bug Fixes BZ# 857917 The IPVS timeout values in the Piranha web interface could be reset whenever the Global Settings page was visited. As a consequence, if the Transmission Control Protocol (TCP) timeout, TCP FIN timeout, or User Datagram Protocol (UDP) timeout values had been set, these values could be erased from the configuration file. This bug has been fixed and all IPVS timeout values are preserved as expected. BZ#860924 Previously, the Piranha web interface incorrectly displayed the value "5" for a virtual server interface. With this update, the Piranha web interface properly displays the interface associated with a virtual server. All users of piranha are advised to upgrade to these updated packages, which fix these bugs. 7.183.2. RHBA-2013:0576 - piranha bug fix update Updated piranha packages that fix one bug are now available for Red Hat Enterprise Linux 6. Piranha provides high-availability and load-balancing services for Red Hat Enterprise Linux. The piranha packages contain various tools to administer and configure the Linux Virtual Server (LVS), as well as the heartbeat and failover components. LVS is a dynamically-adjusted kernel routing mechanism that provides load balancing, primarily for Web and FTP servers. Bug Fix BZ# 915584 Previously, the lvsd daemon did not properly activate the sorry server fallback service when all real servers were determined to be unavailable. As a result, incoming traffic for a virtual service with no available real servers were not directed to the sorry server. With this update, the lvsd daemon properly activates the sorry server when no real servers are available. Users of piranha are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/piranha |
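To confirm the corrected behaviour after applying the update, the director's IPVS state can be inspected directly. The sketch below is not part of the erratum: it assumes the default /etc/sysconfig/ha/lvs.cf configuration path and the standard ipvsadm and pulse tools on the LVS director.

```bash
#!/bin/bash
# Sketch: update piranha and check the behaviour fixed by these errata.
# Run as root on the LVS director.

yum update -y piranha

# Show the TCP / TCP FIN / UDP timeouts currently known to the kernel's IPVS
# table; after visiting the Piranha web interface they should be unchanged.
ipvsadm -L --timeout

# Restart the pulse daemon so lvsd re-reads /etc/sysconfig/ha/lvs.cf.
service pulse restart

# List virtual services and their real servers; with no real servers
# available, traffic should now be directed to the configured sorry server.
ipvsadm -L -n
```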
Chapter 3. Binding [v1] | Chapter 3. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object Required target 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata target object ObjectReference contains enough information to let you inspect or modify the referred object. 3.1.1. .target Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/bindings POST : create a Binding /api/v1/namespaces/{namespace}/pods/{name}/binding POST : create binding of a Pod 3.2.1. /api/v1/namespaces/{namespace}/bindings Table 3.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a Binding Table 3.3. Body parameters Parameter Type Description body Binding schema Table 3.4. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty 3.2.2. /api/v1/namespaces/{namespace}/pods/{name}/binding Table 3.5. Global path parameters Parameter Type Description name string name of the Binding namespace string object name and auth scope, such as for teams and projects Table 3.6. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create binding of a Pod Table 3.7. Body parameters Parameter Type Description body Binding schema Table 3.8. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/metadata_apis/binding-v1
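As a hedged illustration of the endpoints above, the manifest-based call below creates a Binding for an already existing, unscheduled pod. The namespace, pod, and node names are placeholders, and the caller needs permission to create bindings (or the pods/binding subresource).

```bash
#!/bin/bash
# Sketch: manually bind a pending pod to a node through the Binding API.
# The pod must already exist and must not yet be scheduled.
set -euo pipefail

NAMESPACE=demo       # example values only
POD=manual-pod
NODE=worker-0

# POST to /api/v1/namespaces/{namespace}/bindings by creating a Binding object.
cat <<EOF | oc create -n "${NAMESPACE}" -f -
apiVersion: v1
kind: Binding
metadata:
  name: ${POD}
target:
  apiVersion: v1
  kind: Node
  name: ${NODE}
EOF

# Equivalent call against the pod's binding subresource
# (/api/v1/namespaces/{namespace}/pods/{name}/binding), given the same payload
# as JSON in binding.json:
# oc create --raw "/api/v1/namespaces/${NAMESPACE}/pods/${POD}/binding" -f binding.json

oc get pod -n "${NAMESPACE}" "${POD}" -o wide   # should now show ${NODE}
```

Because the scheduler normally performs this POST itself, manual bindings are mainly useful for experiments with custom scheduling.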
4.2. Creating an Image Builder blueprint in the web console interface | To describe the customized system image, create a blueprint first. Prerequisites You have opened the Image Builder interface of the RHEL 7 web console in a browser. Procedure 1. Click Create Blueprint in the top right corner. Figure 4.3. Creating a Blueprint A pop-up appears with fields for the blueprint name and description . 2. Fill in the name of the blueprint and its description, then click Create. The screen changes to blueprint editing mode . 3. Add components that you want to include in the system image: i. On the left, enter all or part of the component name in the Available Components field and press Enter. Figure 4.4. Searching for Available Components The search is added to the list of filters under the text entry field, and the list of components below is reduced to those that match the search. If the list of components is too long, add further search terms in the same way. ii. The list of components is paged. To move to other result pages, use the arrows and entry field above the component list. iii. Click on the name of the component you intend to use to display its details. The right pane fills with details of the component, such as its version and dependencies. iv. Select the version you want to use in the Component Options box, with the Version Release dropdown. v. Click Add in the top left. vi. If you added a component by mistake, remove it by clicking the ... button at the far right of its entry in the right pane, and select Remove in the menu. Note If you do not intend to select a version for some components, you can skip the component details screen and version selection by clicking the + buttons on the right side of the component list. 4. To save the blueprint, click Commit in the top right. A dialog with a summary of the changes pops up. Click Commit . A small pop-up on the right informs you of the saving progress and then the result. 5. To exit the editing screen, click Back to Blueprints in the top left. The Image Builder view opens, listing existing blueprints. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_2
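A blueprint of the same shape can also be created without the web console, assuming the Image Builder command-line tool is installed (the composer-cli package on RHEL 7). The blueprint name and package list below are only examples.

```bash
#!/bin/bash
# Sketch: push an example blueprint with composer-cli instead of the web console.
set -euo pipefail

cat > example-httpd.toml <<'EOF'
name = "example-httpd"
description = "Minimal web server image"
version = "0.0.1"

[[packages]]
name = "httpd"
version = "*"

[[packages]]
name = "openssh-server"
version = "*"
EOF

composer-cli blueprints push example-httpd.toml   # commit the blueprint
composer-cli blueprints list                      # verify it is listed
composer-cli blueprints show example-httpd        # print the stored TOML
```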
4.5. Securing DNS Traffic with DNSSEC | 4.5. Securing DNS Traffic with DNSSEC 4.5.1. Introduction to DNSSEC DNSSEC is a set of Domain Name System Security Extensions ( DNSSEC ) that enables a DNS client to authenticate and check the integrity of responses from a DNS nameserver in order to verify their origin and to determine if they have been tampered with in transit. 4.5.2. Understanding DNSSEC For connecting over the Internet, a growing number of websites now offer the ability to connect securely using HTTPS . However, before connecting to an HTTPS webserver, a DNS lookup must be performed, unless you enter the IP address directly. These DNS lookups are done insecurely and are subject to man-in-the-middle attacks due to lack of authentication. In other words, a DNS client cannot have confidence that the replies that appear to come from a given DNS nameserver are authentic and have not been tampered with. More importantly, a recursive nameserver cannot be sure that the records it obtains from other nameservers are genuine. The DNS protocol did not provide a mechanism for the client to ensure it was not subject to a man-in-the-middle attack. DNSSEC was introduced to address the lack of authentication and integrity checks when resolving domain names using DNS . It does not address the problem of confidentiality. Publishing DNSSEC information involves digitally signing DNS resource records as well as distributing public keys in such a way as to enable DNS resolvers to build a hierarchical chain of trust. Digital signatures for all DNS resource records are generated and added to the zone as digital signature resource records ( RRSIG ). The public key of a zone is added as a DNSKEY resource record. To build the hierarchical chain, hashes of the DNSKEY are published in the parent zone as Delegation of Signing ( DS ) resource records. To facilitate proof of non-existence, the NextSECure ( NSEC ) and NSEC3 resource records are used. In a DNSSEC signed zone, each resource record set ( RRset ) has a corresponding RRSIG resource record. Note that records used for delegation to a child zone (NS and glue records) are not signed; these records appear in the child zone and are signed there. Processing DNSSEC information is done by resolvers that are configured with the root zone public key. Using this key, resolvers can verify the signatures used in the root zone. For example, the root zone has signed the DS record for .com . The root zone also serves NS and glue records for the .com name servers. The resolver follows this delegation and queries for the DNSKEY record of .com using these delegated name servers. The hash of the DNSKEY record obtained should match the DS record in the root zone. If so, the resolver will trust the obtained DNSKEY for .com . In the .com zone, the RRSIG records are created by the .com DNSKEY. This process is repeated similarly for delegations within .com , such as redhat.com . Using this method, a validating DNS resolver only needs to be configured with one root key while it collects many DNSKEYs from around the world during its normal operation. If a cryptographic check fails, the resolver will return SERVFAIL to the application. DNSSEC has been designed in such a way that it will be completely invisible to applications not supporting DNSSEC. If a non-DNSSEC application queries a DNSSEC capable resolver, it will receive the answer without any of these new resource record types such as RRSIG. 
However, the DNSSEC capable resolver will still perform all cryptographic checks, and will still return a SERVFAIL error to the application if it detects malicious DNS answers. DNSSEC protects the integrity of the data between DNS servers (authoritative and recursive), it does not provide security between the application and the resolver. Therefore, it is important that the applications are given a secure transport to their resolver. The easiest way to accomplish that is to run a DNSSEC capable resolver on localhost and use 127.0.0.1 in /etc/resolv.conf . Alternatively a VPN connection to a remote DNS server could be used. Understanding the Hotspot Problem Wi-Fi Hotspots or VPNs rely on " DNS lies " : Captive portals tend to hijack DNS in order to redirect users to a page where they are required to authenticate (or pay) for the Wi-Fi service. Users connecting to a VPN often need to use an " internal only " DNS server in order to locate resources that do not exist outside the corporate network. This requires additional handling by software. For example, dnssec-trigger can be used to detect if a Hotspot is hijacking the DNS queries and unbound can act as a proxy nameserver to handle the DNSSEC queries. Choosing a DNSSEC Capable Recursive Resolver To deploy a DNSSEC capable recursive resolver, either BIND or unbound can be used. Both enable DNSSEC by default and are configured with the DNSSEC root key. To enable DNSSEC on a server, either will work however the use of unbound is preferred on mobile devices, such as notebooks, as it allows the local user to dynamically reconfigure the DNSSEC overrides required for Hotspots when using dnssec-trigger , and for VPNs when using Libreswan . The unbound daemon further supports the deployment of DNSSEC exceptions listed in the etc/unbound/*.d/ directories which can be useful to both servers and mobile devices. 4.5.3. Understanding Dnssec-trigger Once unbound is installed and configured in /etc/resolv.conf , all DNS queries from applications are processed by unbound . dnssec-trigger only reconfigures the unbound resolver when triggered to do so. This mostly applies to roaming client machines, such as laptops, that connect to different Wi-Fi networks. The process is as follows: NetworkManager " triggers " dnssec-trigger when a new DNS server is obtained through DHCP . Dnssec-trigger then performs a number of tests against the server and decides whether or not it properly supports DNSSEC. If it does, then dnssec-trigger reconfigures unbound to use that DNS server as a forwarder for all queries. If the tests fail, dnssec-trigger will ignore the new DNS server and try a few available fall-back methods. If it determines that an unrestricted port 53 ( UDP and TCP ) is available, it will tell unbound to become a full recursive DNS server without using any forwarder. If this is not possible, for example because port 53 is blocked by a firewall for everything except reaching the network's DNS server itself, it will try to use DNS to port 80, or TLS encapsulated DNS to port 443. Servers running DNS on port 80 and 443 can be configured in /etc/dnssec-trigger/dnssec-trigger.conf . Commented out examples should be available in the default configuration file. If these fall-back methods also fail, dnssec-trigger offers to either operate insecurely, which would bypass DNSSEC completely, or run in " cache only " mode where it will not attempt new DNS queries but will answer for everything it already has in the cache. 
Wi-Fi Hotspots increasingly redirect users to a sign-on page before granting access to the Internet. During the probing sequence outlined above, if a redirection is detected, the user is prompted to ask if a login is required to gain Internet access. The dnssec-trigger daemon continues to probe for DNSSEC resolvers every ten seconds. See Section 4.5.8, "Using Dnssec-trigger" for information on using the dnssec-trigger graphical utility. 4.5.4. VPN Supplied Domains and Name Servers Some types of VPN connections can convey a domain and a list of nameservers to use for that domain as part of the VPN tunnel setup. On Red Hat Enterprise Linux , this is supported by NetworkManager . This means that the combination of unbound , dnssec-trigger , and NetworkManager can properly support domains and name servers provided by VPN software. Once the VPN tunnel comes up, the local unbound cache is flushed for all entries of the domain name received, so that queries for names within the domain name are fetched fresh from the internal name servers reached using the VPN. When the VPN tunnel is terminated, the unbound cache is flushed again to ensure any queries for the domain will return the public IP addresses, and not the previously obtained private IP addresses. See Section 4.5.11, "Configuring DNSSEC Validation for Connection Supplied Domains" . 4.5.5. Recommended Naming Practices Red Hat recommends that both static and transient names match the fully-qualified domain name ( FQDN ) used for the machine in DNS , such as host.example.com . The Internet Corporation for Assigned Names and Numbers (ICANN) sometimes adds previously unregistered Top-Level Domains (such as .yourcompany ) to the public register. Therefore, Red Hat strongly recommends that you do not use a domain name that is not delegated to you, even on a private network, as this can result in a domain name that resolves differently depending on network configuration. As a result, network resources can become unavailable. Using domain names that are not delegated to you also makes DNSSEC more difficult to deploy and maintain, as domain name collisions require manual configuration to enable DNSSEC validation. See the ICANN FAQ on domain name collision for more information on this issue. 4.5.6. Understanding Trust Anchors In a hierarchical cryptographic system, a trust anchor is an authoritative entity which is assumed to be trustworthy. For example, in X.509 architecture, a root certificate is a trust anchor from which a chain of trust is derived. The trust anchor must be put in the possession of the trusting party beforehand to make path validation possible. In the context of DNSSEC, a trust anchor consists of a DNS name and public key (or hash of the public key) associated with that name. It is expressed as a base 64 encoded key. It is similar to a certificate in that it is a means of exchanging information, including a public key, which can be used to verify and authenticate DNS records. RFC 4033 defines a trust anchor as a configured DNSKEY RR or DS RR hash of a DNSKEY RR. A validating security-aware resolver uses this public key or hash as a starting point for building the authentication chain to a signed DNS response. In general, a validating resolver will have to obtain the initial values of its trust anchors through some secure or trusted means outside the DNS protocol. Presence of a trust anchor also implies that the resolver should expect the zone to which the trust anchor points to be signed. 4.5.7. Installing DNSSEC 4.5.7.1. 
Installing unbound In order to validate DNS using DNSSEC locally on a machine, it is necessary to install the DNS resolver unbound (or bind ). It is only necessary to install dnssec-trigger on mobile devices. For servers, unbound should be sufficient although a forwarding configuration for the local domain might be required depending on where the server is located (LAN or Internet). dnssec-trigger will currently only help with the global public DNS zone. NetworkManager , dhclient , and VPN applications can often gather the domain list (and nameserver list as well) automatically, but not dnssec-trigger nor unbound . To install unbound enter the following command as the root user: 4.5.7.2. Checking if unbound is Running To determine whether the unbound daemon is running, enter the following command: The systemctl status command will report unbound as Active: inactive (dead) if the unbound service is not running. 4.5.7.3. Starting unbound To start the unbound daemon for the current session, enter the following command as the root user: Run the systemctl enable command to ensure that unbound starts up every time the system boots: The unbound daemon allows configuration of local data or overrides using the following directories: The /etc/unbound/conf.d directory is used to add configurations for a specific domain name. This is used to redirect queries for a domain name to a specific DNS server. This is often used for sub-domains that only exist within a corporate WAN. The /etc/unbound/keys.d directory is used to add trust anchors for a specific domain name. This is required when an internal-only name is DNSSEC signed, but there is no publicly existing DS record to build a path of trust. Another use case is when an internal version of a domain is signed using a different DNSKEY than the publicly available name outside the corporate WAN. The /etc/unbound/local.d directory is used to add specific DNS data as a local override. This can be used to build blacklists or create manual overrides. This data will be returned to clients by unbound , but it will not be marked as DNSSEC signed. NetworkManager , as well as some VPN software, may change the configuration dynamically. These configuration directories contain commented out example entries. For further information see the unbound.conf(5) man page. 4.5.7.4. Installing Dnssec-trigger The dnssec-trigger application runs as a daemon, dnssec-triggerd . To install dnssec-trigger enter the following command as the root user: 4.5.7.5. Checking if the Dnssec-trigger Daemon is Running To determine whether dnssec-triggerd is running, enter the following command: The systemctl status command will report dnssec-triggerd as Active: inactive (dead) if the dnssec-triggerd daemon is not running. To start it for the current session enter the following command as the root user: Run the systemctl enable command to ensure that dnssec-triggerd starts up every time the system boots: 4.5.8. Using Dnssec-trigger The dnssec-trigger application has a GNOME panel utility for displaying DNSSEC probe results and for performing DNSSEC probe requests on demand. To start the utility, press the Super key to enter the Activities Overview, type DNSSEC and then press Enter . An icon resembling a ships anchor is added to the message tray at the bottom of the screen. Press the round blue notification icon in the bottom right of the screen to reveal it. Right click the anchor icon to display a pop-up menu. 
In normal operations unbound is used locally as the name server, and resolv.conf points to 127.0.0.1 . When you click OK on the Hotspot Sign-On panel this is changed. The DNS servers are queried from NetworkManager and put in resolv.conf . Now you can authenticate on the Hotspot's sign-on page. The anchor icon shows a big red exclamation mark to warn you that DNS queries are being made insecurely. When authenticated, dnssec-trigger should automatically detect this and switch back to secure mode, although in some cases it cannot and the user has to do this manually by selecting Reprobe . Dnssec-trigger does not normally require any user interaction. Once started, it works in the background and if a problem is encountered it notifies the user by means of a pop-up text box. It also informs unbound about changes to the resolv.conf file. 4.5.9. Using dig With DNSSEC To see whether DNSSEC is working, one can use various command line tools. The best tool to use is the dig command from the bind-utils package. Other tools that are useful are drill from the ldns package and unbound-host from the unbound package. The old DNS utilities nslookup and host are obsolete and should not be used. To send a query requesting DNSSEC data using dig , the option +dnssec is added to the command, for example: In addition to the A record, an RRSIG record is returned which contains the DNSSEC signature, as well as the inception time and expiration time of the signature. The unbound server indicated that the data was DNSSEC authenticated by returning the ad bit in the flags: section at the top. If DNSSEC validation fails, the dig command would return a SERVFAIL error: To request more information about the failure, DNSSEC checking can be disabled by specifying the +cd option to the dig command: Often, DNSSEC mistakes manifest themselves by bad inception or expiration time, although in this example, the people at www.dnssec-tools.org have mangled this RRSIG signature on purpose, which we would not be able to detect by looking at this output manually. The error will show in the output of systemctl status unbound and the unbound daemon logs these errors to syslog as follows: Aug 22 22:04:52 laptop unbound: [3065:0] info: validation failure badsign-a.test.dnssec-tools.org. A IN An example using unbound-host : 4.5.10. Setting up Hotspot Detection Infrastructure for Dnssec-trigger When connecting to a network, dnssec-trigger attempts to detect a Hotspot. A Hotspot is generally a device that forces user interaction with a web page before they can use the network resources. The detection is done by attempting to download a specific fixed web page with known content. If there is a Hotspot, then the content received will not be as expected. To set up a fixed web page with known content that can be used by dnssec-trigger to detect a Hotspot, proceed as follows: Set up a web server on some machine that is publicly reachable on the Internet. See the Web Servers chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide. Once you have the server running, publish a static page with known content on it. The page does not need to be a valid HTML page. For example, you could use a plain-text file named hotspot.txt that contains only the string OK . Assuming your server is located at example.com and you published your hotspot.txt file in the web server document_root /static/ sub-directory, then the address to your static web page would be example.com/static/hotspot.txt . 
See the DocumentRoot directive in the Web Servers chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide. Add the following line to the /etc/dnssec-trigger/dnssec-trigger.conf file: url: "http://example.com/static/hotspot.txt OK" This command adds a URL that is probed using HTTP (port 80). The first part is the URL that will be resolved and the page that will be downloaded. The second part of the command is the text string that the downloaded webpage is expected to contain. For more information on the configuration options see the man page dnssec-trigger.conf(8) . 4.5.11. Configuring DNSSEC Validation for Connection Supplied Domains By default, forward zones with proper nameservers are automatically added into unbound by dnssec-trigger for every domain provided by any connection, except Wi-Fi connections through NetworkManager . By default, all forward zones added into unbound are DNSSEC validated. The default behavior for validating forward zones can be altered, so that all forward zones will not be DNSSEC validated by default. To do this, change the validate_connection_provided_zones variable in the dnssec-trigger configuration file /etc/dnssec.conf . As root user, open and edit the line as follows: validate_connection_provided_zones=no The change is not done for any existing forward zones, but only for future forward zones. Therefore if you want to disable DNSSEC for the current provided domain, you need to reconnect. 4.5.11.1. Configuring DNSSEC Validation for Wi-Fi Supplied Domains Adding forward zones for Wi-Fi provided zones can be enabled. To do this, change the add_wifi_provided_zones variable in the dnssec-trigger configuration file, /etc/dnssec.conf . As root user, open and edit the line as follows: add_wifi_provided_zones=yes The change is not done for any existing forward zones, but only for future forward zones. Therefore, if you want to enable DNSSEC for the current Wi-Fi provided domain, you need to reconnect (restart) the Wi-Fi connection. Warning Turning on the addition of Wi-Fi provided domains as forward zones into unbound may have security implications such as: A Wi-Fi access point can intentionally provide you a domain through DHCP for which it does not have authority and route all your DNS queries to its DNS servers. If you have the DNSSEC validation of forward zones turned off , the Wi-Fi provided DNS servers can spoof the IP address for domain names from the provided domain without you knowing it. 4.5.12. Additional Resources The following are resources which explain more about DNSSEC. 4.5.12.1. Installed Documentation dnssec-trigger(8) man page - Describes command options for dnssec-triggerd , dnssec-trigger-control and dnssec-trigger-panel . dnssec-trigger.conf(8) man page - Describes the configuration options for dnssec-triggerd . unbound(8) man page - Describes the command options for unbound , the DNS validating resolver. unbound.conf(5) man page - Contains information on how to configure unbound . resolv.conf(5) man page - Contains information that is read by the resolver routines. 4.5.12.2. Online Documentation http://tools.ietf.org/html/rfc4033 RFC 4033 DNS Security Introduction and Requirements. http://www.dnssec.net/ A website with links to many DNSSEC resources. http://www.dnssec-deployment.org/ The DNSSEC Deployment Initiative, sponsored by the Department for Homeland Security, contains a lot of DNSSEC information and has a mailing list to discuss DNSSEC deployment issues. 
http://www.internetsociety.org/deploy360/dnssec/community/ The Internet Society's " Deploy 360 " initiative to stimulate and coordinate DNSSEC deployment is a good resource for finding communities and DNSSEC activities worldwide. http://www.unbound.net/ This document contains general information about the unbound DNS service. http://www.nlnetlabs.nl/projects/dnssec-trigger/ This document contains general information about dnssec-trigger . | [
"~]# yum install unbound",
"~]USD systemctl status unbound unbound.service - Unbound recursive Domain Name Server Loaded: loaded (/usr/lib/systemd/system/unbound.service; disabled) Active: active (running) since Wed 2013-03-13 01:19:30 CET; 6h ago",
"~]# systemctl start unbound",
"~]# systemctl enable unbound",
"~]# yum install dnssec-trigger",
"~]USD systemctl status dnssec-triggerd systemctl status dnssec-triggerd.service dnssec-triggerd.service - Reconfigure local DNS(SEC) resolver on network change Loaded: loaded (/usr/lib/systemd/system/dnssec-triggerd.service; enabled) Active: active (running) since Wed 2013-03-13 06:10:44 CET; 1h 41min ago",
"~]# systemctl start dnssec-triggerd",
"~]# systemctl enable dnssec-triggerd",
"~]USD dig +dnssec whitehouse.gov ; <<>> DiG 9.9.3-rl.13207.22-P2-RedHat-9.9.3-4.P2.el7 <<>> +dnssec whitehouse.gov ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21388 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;whitehouse.gov. IN A ;; ANSWER SECTION: whitehouse.gov. 20 IN A 72.246.36.110 whitehouse.gov. 20 IN RRSIG A 7 2 20 20130825124016 20130822114016 8399 whitehouse.gov. BB8VHWEkIaKpaLprt3hq1GkjDROvkmjYTBxiGhuki/BJn3PoIGyrftxR HH0377I0Lsybj/uZv5hL4UwWd/lw6Gn8GPikqhztAkgMxddMQ2IARP6p wbMOKbSUuV6NGUT1WWwpbi+LelFMqQcAq3Se66iyH0Jem7HtgPEUE1Zc 3oI= ;; Query time: 227 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Aug 22 22:01:52 EDT 2013 ;; MSG SIZE rcvd: 233",
"~]USD dig badsign-a.test.dnssec-tools.org ; <<>> DiG 9.9.3-rl.156.01-P1-RedHat-9.9.3-3.P1.el7 <<>> badsign-a.test.dnssec-tools.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 1010 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;badsign-a.test.dnssec-tools.org. IN A ;; Query time: 1284 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Aug 22 22:04:52 EDT 2013 ;; MSG SIZE rcvd: 60]",
"~]USD dig +cd +dnssec badsign-a.test.dnssec-tools.org ; <<>> DiG 9.9.3-rl.156.01-P1-RedHat-9.9.3-3.P1.el7 <<>> +cd +dnssec badsign-a.test.dnssec-tools.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26065 ;; flags: qr rd ra cd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;badsign-a.test.dnssec-tools.org. IN A ;; ANSWER SECTION: badsign-a.test.dnssec-tools.org. 49 IN A 75.119.216.33 badsign-a.test.dnssec-tools.org. 49 IN RRSIG A 5 4 86400 20130919183720 20130820173720 19442 test.dnssec-tools.org. E572dLKMvYB4cgTRyAHIKKEvdOP7tockQb7hXFNZKVbfXbZJOIDREJrr zCgAfJ2hykfY0yJHAlnuQvM0s6xOnNBSvc2xLIybJdfTaN6kSR0YFdYZ n2NpPctn2kUBn5UR1BJRin3Gqy20LZlZx2KD7cZBtieMsU/IunyhCSc0 kYw= ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Aug 22 22:06:31 EDT 2013 ;; MSG SIZE rcvd: 257",
"~]USD unbound-host -C /etc/unbound/unbound.conf -v whitehouse.gov whitehouse.gov has address 184.25.196.110 (secure) whitehouse.gov has IPv6 address 2600:1417:11:2:8800::fc4 (secure) whitehouse.gov has IPv6 address 2600:1417:11:2:8000::fc4 (secure) whitehouse.gov mail is handled by 105 mail1.eop.gov. (secure) whitehouse.gov mail is handled by 110 mail5.eop.gov. (secure) whitehouse.gov mail is handled by 105 mail4.eop.gov. (secure) whitehouse.gov mail is handled by 110 mail6.eop.gov. (secure) whitehouse.gov mail is handled by 105 mail2.eop.gov. (secure) whitehouse.gov mail is handled by 105 mail3.eop.gov. (secure)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Securing_DNS_Traffic_with_DNSSEC |
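The installation and verification steps from this section can be collected into one script. The sketch below only automates commands already shown above; the grep on the ad flag is a simple heuristic, and the badsign-a.test.dnssec-tools.org name comes from the section's example and may no longer be served.

```bash
#!/bin/bash
# Sketch: set up a local validating resolver and spot-check DNSSEC validation.
# Run as root.
set -e

yum install -y unbound dnssec-trigger bind-utils
systemctl enable unbound dnssec-triggerd
systemctl start unbound dnssec-triggerd

# A validated answer carries the "ad" (authenticated data) flag.
if dig @127.0.0.1 +dnssec whitehouse.gov | grep -q 'flags:.* ad'; then
    echo "DNSSEC validation is working"
fi

# A deliberately broken signature must fail with SERVFAIL, not return data.
if dig @127.0.0.1 badsign-a.test.dnssec-tools.org | grep -q 'status: SERVFAIL'; then
    echo "Bogus data is being rejected as expected"
fi

# unbound-host gives a per-record (in)secure verdict.
unbound-host -C /etc/unbound/unbound.conf -v whitehouse.gov
```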
4.3. Configuring Static Routes with GUI | 4.3. Configuring Static Routes with GUI To set a static route, open the IPv4 or IPv6 settings window for the connection you want to configure. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for instructions on how to do that. Routes Address - Enter the IP address of a remote network, sub-net, or host. Netmask - The netmask or prefix length of the IP address entered above. Gateway - The IP address of the gateway leading to the remote network, sub-net, or host entered above. Metric - A network cost, a preference value to give to this route. Lower values will be preferred over higher values. Automatic When Automatic is ON , routes from RA or DHCP are used, but you can also add additional static routes. When OFF , only static routes you define are used. Use this connection only for resources on its network Select this check box to prevent the connection from becoming the default route. Typical examples are where a connection is a VPN tunnel or a leased line to a head office and you do not want any Internet-bound traffic to pass over the connection. Selecting this option means that only traffic specifically destined for routes learned automatically over the connection or entered here manually will be routed over the connection. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configuring_Static_Routes_with_GUI |
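The same settings can be applied from the command line with nmcli; the connection name and the example network, gateway, and metric values below are placeholders.

```bash
#!/bin/bash
# Sketch: nmcli equivalents of the GUI route settings described above.
set -euo pipefail

CONN="Wired connection 1"   # example connection name

# Add a static route: <remote network>/<prefix> <gateway> <metric>.
nmcli connection modify "${CONN}" +ipv4.routes "192.0.2.0/24 198.51.100.1 100"

# "Use this connection only for resources on its network":
# never install the default route through this connection.
nmcli connection modify "${CONN}" ipv4.never-default yes

# Ignore automatically obtained (DHCP/RA) routes, roughly matching Automatic = OFF.
nmcli connection modify "${CONN}" ipv4.ignore-auto-routes yes

# Re-activate the connection so the changes take effect, then verify.
nmcli connection up "${CONN}"
ip route show
```

Here ipv4.never-default corresponds to the "Use this connection only for resources on its network" check box, and ipv4.ignore-auto-routes roughly corresponds to switching Automatic off for routes.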
Chapter 8. Prometheus [monitoring.coreos.com/v1] | Chapter 8. Prometheus [monitoring.coreos.com/v1] Description Prometheus defines a Prometheus deployment. Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Prometheus cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 8.1.1. .spec Description Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalAlertManagerConfigs object AdditionalAlertManagerConfigs specifies a key of a Secret containing additional Prometheus Alertmanager configurations. The Alertmanager configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. additionalAlertRelabelConfigs object AdditionalAlertRelabelConfigs specifies a key of a Secret containing additional Prometheus alert relabel configurations. The alert relabel configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. additionalArgs array AdditionalArgs allows setting additional arguments for the 'prometheus' container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. 
In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. additionalScrapeConfigs object AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible for making sure they are valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. affinity object Defines the Pods' affinity scheduling rules if specified. alerting object Defines the settings related to Alertmanager. allowOverlappingBlocks boolean AllowOverlappingBlocks enables vertical compaction and vertical query merge in Prometheus. Deprecated: this flag has no effect for Prometheus >= 2.39.0 where overlapping blocks are enabled by default. apiserverConfig object APIServerConfig allows specifying a host and auth methods to access the Kubernetes API server. If null, Prometheus is assumed to run inside of the cluster: it will discover the API servers automatically and use the Pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. arbitraryFSAccessThroughSMs object When true, ServiceMonitor, PodMonitor and Probe objects are forbidden to reference arbitrary files on the file system of the 'prometheus' container. When a ServiceMonitor's endpoint specifies a bearerTokenFile value (e.g. '/var/run/secrets/kubernetes.io/serviceaccount/token'), a malicious target can get access to the Prometheus service account's token in the Prometheus' scrape request. Setting spec.arbitraryFSAccessThroughSM to 'true' would prevent the attack. Users should instead provide the credentials using the spec.bearerTokenSecret field. baseImage string Deprecated: use 'spec.image' instead. bodySizeLimit string BodySizeLimit defines a per-scrape limit on response body size. Only valid in Prometheus versions 2.45.0 and newer. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/prometheus/configmaps/<configmap-name> in the 'prometheus' container. containers array Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to the Pods or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The names of containers managed by the operator are: * prometheus * config-reloader * thanos-sidecar Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice.
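As an illustration of the spec.additionalScrapeConfigs field described above, the following is a minimal sketch of a Prometheus resource referencing a Secret key that holds extra scrape jobs. The Secret name, namespace, and scrape job target are hypothetical examples, not values defined by this reference.

```yaml
# Hypothetical Secret holding an additional scrape job in raw Prometheus format.
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
  namespace: monitoring
stringData:
  prometheus-additional.yaml: |
    - job_name: external-blackbox
      static_configs:
      - targets: ['192.0.2.10:9115']
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: monitoring
spec:
  # spec.additionalScrapeConfigs is a Secret key selector (name/key).
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
```

Because the extra configuration is appended verbatim to the generated configuration, it must be valid Prometheus scrape configuration; the operator does not validate it.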
containers[] object A single application container that you want to run within a pod. disableCompaction boolean When true, the Prometheus compaction is disabled. enableAdminAPI boolean Enables access to the Prometheus web admin API. WARNING: Enabling the admin APIs enables mutating endpoints, to delete data, shutdown Prometheus, and more. Enabling this should be done with care and the user is advised to add additional authentication and authorization via a proxy to ensure only clients authorized to perform these actions can do so. For more information: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis enableFeatures array (string) Enable access to Prometheus feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. For more information see https://prometheus.io/docs/prometheus/latest/feature_flags/ enableRemoteWriteReceiver boolean Enable Prometheus to be used as a receiver for the Prometheus remote write protocol. WARNING: This is not considered an efficient way of ingesting samples. Use it with caution for specific low-volume use cases. It is not suitable for replacing the ingestion via scraping and turning Prometheus into a push-based metrics collection system. For more information see https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver It requires Prometheus >= v2.33.0. enforcedBodySizeLimit string When defined, enforcedBodySizeLimit specifies a global limit on the size of uncompressed response body that will be accepted by Prometheus. Targets responding with a body larger than this many bytes will cause the scrape to fail. It requires Prometheus >= v2.28.0. enforcedLabelLimit integer When defined, enforcedLabelLimit specifies a global limit on the number of labels per sample. The value overrides any spec.labelLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelLimit is greater than zero and less than spec.enforcedLabelLimit . It requires Prometheus >= v2.27.0. enforcedLabelNameLengthLimit integer When defined, enforcedLabelNameLengthLimit specifies a global limit on the length of labels name per sample. The value overrides any spec.labelNameLengthLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelNameLengthLimit is greater than zero and less than spec.enforcedLabelNameLengthLimit . It requires Prometheus >= v2.27.0. enforcedLabelValueLengthLimit integer When not null, enforcedLabelValueLengthLimit defines a global limit on the length of labels value per sample. The value overrides any spec.labelValueLengthLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelValueLengthLimit is greater than zero and less than spec.enforcedLabelValueLengthLimit . It requires Prometheus >= v2.27.0. enforcedNamespaceLabel string When not empty, a label will be added to 1. All metrics scraped from ServiceMonitor , PodMonitor , Probe and ScrapeConfig objects. 2. All metrics generated from recording rules defined in PrometheusRule objects. 3. All alerts generated from alerting rules defined in PrometheusRule objects. 4. All vector selectors of PromQL expressions defined in PrometheusRule objects. The label will not be added for objects referenced in spec.excludedFromEnforcement . The label's name is this field's value.
The label's value is the namespace of the ServiceMonitor , PodMonitor , Probe or PrometheusRule object. enforcedSampleLimit integer When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any spec.sampleLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.sampleLimit is greater than zero and less than spec.enforcedSampleLimit . It is meant to be used by admins to keep the overall number of samples/series under a desired limit. enforcedTargetLimit integer When defined, enforcedTargetLimit specifies a global limit on the number of scraped targets. The value overrides any spec.targetLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.targetLimit is greater than zero and less than spec.enforcedTargetLimit . It is meant to be used by admins to keep the overall number of targets under a desired limit. evaluationInterval string Interval between rule evaluations. Default: "30s" excludedFromEnforcement array List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. It is only applicable if spec.enforcedNamespaceLabel is set to true. excludedFromEnforcement[] object ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. exemplars object Exemplars related settings that are runtime reloadable. It requires to enable the exemplar-storage feature flag to be effective. externalLabels object (string) The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). Labels defined by spec.replicaExternalLabelName and spec.prometheusExternalLabelName take precedence over this list. externalUrl string The external URL under which the Prometheus service is externally available. This is necessary to generate correct URLs (for instance if Prometheus is accessible behind an Ingress resource). hostAliases array Optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostNetwork boolean Use the host's network namespace if true. Make sure to understand the security implications if you want to enable it ( https://kubernetes.io/docs/concepts/configuration/overview/ ). When hostNetwork is enabled, this will set the DNS policy to ClusterFirstWithHostNet automatically. ignoreNamespaceSelectors boolean When true, spec.namespaceSelector from all PodMonitor, ServiceMonitor and Probe objects will be ignored. They will only discover targets within the namespace of the PodMonitor, ServiceMonitor and Probe object. image string Container image name for Prometheus. If specified, it takes precedence over the spec.baseImage , spec.tag and spec.sha fields. Specifying spec.version is still necessary to ensure the Prometheus Operator knows which version of Prometheus is being configured. If neither spec.image nor spec.baseImage are defined, the operator will use the latest upstream version of Prometheus available at the time when the operator was released. imagePullPolicy string Image pull policy for the 'prometheus', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details.
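To illustrate the external label and enforced limit fields described above, here is a small sketch of a spec fragment; the URL, label values, and limit numbers are placeholder examples rather than recommended settings.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  evaluationInterval: 30s
  externalUrl: https://prometheus.apps.example.com   # placeholder URL
  externalLabels:
    cluster: example-cluster
    region: example-region
  # Global caps that override per-object sampleLimit/targetLimit when those
  # are unset, zero, or higher than the enforced value.
  enforcedSampleLimit: 50000
  enforcedTargetLimit: 100
```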
imagePullSecrets array An optional list of references to Secrets in the same namespace to use for pulling images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows injecting initContainers to the Pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The names of init containers managed by the operator are: * init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. listenLocal boolean When true, the Prometheus server listens on the loopback address instead of the Pod IP's address. logFormat string Log format for Prometheus and the config-reloader sidecar. logLevel string Log level for Prometheus and the config-reloader sidecar. minReadySeconds integer Minimum number of seconds for which a newly created Pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Defines on which Nodes the Pods are scheduled. overrideHonorLabels boolean When true, Prometheus resolves label conflicts by renaming the labels in the scraped data to "exported_<label value>" for all targets created from service and pod monitors. Otherwise the HonorLabels field of the service or pod monitor applies. overrideHonorTimestamps boolean When true, Prometheus ignores the timestamps for all the targets created from service and pod monitors. Otherwise the HonorTimestamps field of the service or pod monitor applies. paused boolean When a Prometheus deployment is paused, no actions except for deletion will be performed on the underlying objects. podMetadata object PodMetadata configures labels and annotations which are propagated to the Prometheus pods. podMonitorNamespaceSelector object Namespaces to match for PodMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. podMonitorSelector object Experimental PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects.
If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. podTargetLabels array (string) PodTargetLabels are appended to the spec.podTargetLabels field of all PodMonitor and ServiceMonitor objects. portName string Port name used for the pods and governing service. Default: "web" priorityClassName string Priority class assigned to the Pods. probeNamespaceSelector object Experimental Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. probeSelector object Experimental Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. prometheusExternalLabelName string Name of Prometheus external label used to denote the Prometheus instance name. The external label will not be added when the field is set to the empty string ( "" ). Default: "prometheus" prometheusRulesExcludedFromEnforce array Defines the list of PrometheusRule objects to which the namespace label enforcement doesn't apply. This is only relevant when spec.enforcedNamespaceLabel is set to true. Deprecated: use spec.excludedFromEnforcement instead. prometheusRulesExcludedFromEnforce[] object PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. query object QuerySpec defines the configuration of the Prometheus query service. queryLogFile string queryLogFile specifies the file to which PromQL queries are logged. If the filename has an empty path, e.g. 'query.log', the Prometheus Pods will mount the file into an emptyDir volume at /var/log/prometheus . If a full path is provided, e.g. '/var/log/prometheus/query.log', you must mount a volume in the specified directory and it must be writable. This is because the prometheus container runs with a read-only root filesystem for security reasons. Alternatively, the location can be set to a standard I/O stream, e.g. /dev/stdout , to log query information to the default Prometheus log stream. remoteRead array Defines the list of remote read configurations. remoteRead[] object RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. remoteWrite array Defines the list of remote write configurations. remoteWrite[] object RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint.
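The spec.remoteWrite list described above can be illustrated with a minimal sketch; the endpoint URL, Secret name, and relabeling rule are assumptions made for the example only.

```yaml
spec:
  remoteWrite:
  - url: https://metrics.example.com/api/v1/write    # placeholder endpoint
    basicAuth:
      username:
        name: remote-write-credentials               # hypothetical Secret
        key: username
      password:
        name: remote-write-credentials
        key: password
    # Only forward a subset of series to the remote endpoint.
    writeRelabelConfigs:
    - sourceLabels: [__name__]
      regex: kube_.*
      action: keep
```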
replicaExternalLabelName string Name of Prometheus external label used to denote the replica name. The external label will not be added when the field is set to the empty string ( "" ). Default: "prometheus_replica" replicas integer Number of replicas of each shard to deploy for a Prometheus deployment. spec.replicas multiplied by spec.shards is the total number of Pods created. Default: 1 resources object Defines the resources requests and limits of the 'prometheus' container. retention string How long to retain the Prometheus data. Default: "24h" if spec.retention and spec.retentionSize are empty. retentionSize string Maximum number of bytes used by the Prometheus data. routePrefix string The route prefix Prometheus registers HTTP handlers for. This is useful when using spec.externalURL , and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . ruleNamespaceSelector object Namespaces to match for PrometheusRule discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. ruleSelector object PrometheusRule objects to be selected for rule evaluation. An empty label selector matches all objects. A null label selector matches no objects. rules object Defines the configuration of the Prometheus rules' engine. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. Only valid in Prometheus versions 2.45.0 and newer. scrapeConfigNamespaceSelector object Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. scrapeConfigSelector object Experimental ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. scrapeInterval string Interval between consecutive scrapes. Default: "30s" scrapeTimeout string Number of seconds to wait until a scrape request times out. secrets array (string) Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/prometheus/secrets/<secret-name> in the 'prometheus' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. serviceMonitorNamespaceSelector object Namespaces to match for ServiceMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. serviceMonitorSelector object ServiceMonitors to be selected for target discovery.
An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. sha string Deprecated: use 'spec.image' instead. The image's digest can be specified as part of the image name. shards integer EXPERIMENTAL: Number of shards to distribute targets onto. spec.replicas multiplied by spec.shards is the total number of Pods created. Note that scaling down shards will not reshard data onto remaining instances; it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. Sharding is performed on the content of the __address__ target meta-label for PodMonitors and ServiceMonitors and __param_target__ for Probes. Default: 1 storage object Storage defines the storage used by Prometheus. tag string Deprecated: use 'spec.image' instead. The image's tag can be specified as part of the image name. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. Only valid in Prometheus versions 2.45.0 and newer. thanos object Defines the configuration of the optional Thanos sidecar. This section is experimental, it may change significantly without deprecation notice in any release. tolerations array Defines the Pods' tolerations if specified. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array Defines the pod's topology spread constraints if specified. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. tracingConfig object EXPERIMENTAL: TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way. tsdb object Defines the runtime reloadable configuration of the timeseries database (TSDB). version string Version of Prometheus being deployed. The operator uses this information to generate the Prometheus StatefulSet + configuration files. If not specified, the operator assumes the latest upstream version of Prometheus available at the time when the version of the operator was released. volumeMounts array VolumeMounts allows the configuration of additional VolumeMounts. VolumeMounts will be appended to other VolumeMounts in the 'prometheus' container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows the configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod.
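Putting several of the retention, scaling, and storage fields described above together, a sketch of a persistent deployment might look like the following; the storage class name and sizes are placeholders chosen for illustration.

```yaml
spec:
  replicas: 2          # replicas x shards = total number of Pods
  shards: 1
  retention: 15d
  retentionSize: 40GiB
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: example-storage-class   # placeholder
        resources:
          requests:
            storage: 50Gi
```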
walCompression boolean Configures compression of the write-ahead log (WAL) using Snappy. WAL compression is enabled by default for Prometheus >= 2.20.0 Requires Prometheus v2.11.0 and above. web object Defines the configuration of the Prometheus web server. 8.1.2. .spec.additionalAlertManagerConfigs Description AdditionalAlertManagerConfigs specifies a key of a Secret containing additional Prometheus Alertmanager configurations. The Alertmanager configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.3. .spec.additionalAlertRelabelConfigs Description AdditionalAlertRelabelConfigs specifies a key of a Secret containing additional Prometheus alert relabel configurations. The alert relabel configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.4. .spec.additionalArgs Description AdditionalArgs allows setting additional arguments for the 'prometheus' container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. Type array 8.1.5. .spec.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 8.1.6. 
.spec.additionalScrapeConfigs Description AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.7. .spec.affinity Description Defines the Pods' affinity scheduling rules if specified. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 8.1.8. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 8.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 8.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 8.1.11. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 8.1.12. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 8.1.13. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.14. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 8.1.15. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 8.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 8.1.18. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 8.1.19. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 8.1.20. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.21. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 8.1.22. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. 
operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.23. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 8.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 8.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. 
A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 8.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.28. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.29. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.30. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. 
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.31. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.32. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 8.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.36. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.37. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.38. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.39. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.40. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.41. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 8.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 8.1.43. 
.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 8.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.46. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.47. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.48. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.49. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.50. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 8.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. 
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.54. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.55. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.56. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.57. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.58. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.59. .spec.alerting Description Defines the settings related to Alertmanager. Type object Required alertmanagers Property Type Description alertmanagers array AlertmanagerEndpoints Prometheus should fire alerts against. alertmanagers[] object AlertmanagerEndpoints defines a selection of a single Endpoints object containing Alertmanager IPs to fire alerts against. 8.1.60. .spec.alerting.alertmanagers Description AlertmanagerEndpoints Prometheus should fire alerts against. Type array 8.1.61. .spec.alerting.alertmanagers[] Description AlertmanagerEndpoints defines a selection of a single Endpoints object containing Alertmanager IPs to fire alerts against. Type object Required name namespace port Property Type Description apiVersion string Version of the Alertmanager API that Prometheus uses to send alerts. It can be "v1" or "v2". authorization object Authorization section for Alertmanager. Cannot be set at the same time as basicAuth , or bearerTokenFile . basicAuth object BasicAuth configuration for Alertmanager. Cannot be set at the same time as bearerTokenFile , or authorization . bearerTokenFile string File to read bearer token for Alertmanager. Cannot be set at the same time as basicAuth , or authorization . Deprecated: this will be removed in a future release. Prefer using authorization . enableHttp2 boolean Whether to enable HTTP2. name string Name of the Endpoints object in the namespace. namespace string Namespace of the Endpoints object. pathPrefix string Prefix for the HTTP path alerts are pushed to. port integer-or-string Port on which the Alertmanager API is exposed. scheme string Scheme to use when firing alerts. timeout string Timeout is a per-target Alertmanager timeout when pushing alerts. tlsConfig object TLS Config to use for Alertmanager. 8.1.62. .spec.alerting.alertmanagers[].authorization Description Authorization section for Alertmanager. Cannot be set at the same time as basicAuth , or bearerTokenFile . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.63. 
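A minimal sketch of the alerting.alertmanagers fields described above follows. The Endpoints name, namespace, and Secret reference are illustrative assumptions, not values defined by this reference.

spec:
  alerting:
    alertmanagers:
    - name: alertmanager-example             # name of the Endpoints object (illustrative)
      namespace: example-monitoring          # illustrative namespace
      port: web                              # port may be a name or a number
      apiVersion: v2
      scheme: https
      authorization:
        type: Bearer
        credentials:
          name: alertmanager-token           # illustrative Secret
          key: token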
.spec.alerting.alertmanagers[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.64. .spec.alerting.alertmanagers[].basicAuth Description BasicAuth configuration for Alertmanager. Cannot be set at the same time as bearerTokenFile , or authorization . Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 8.1.65. .spec.alerting.alertmanagers[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.66. .spec.alerting.alertmanagers[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.67. .spec.alerting.alertmanagers[].tlsConfig Description TLS Config to use for Alertmanager. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.68. .spec.alerting.alertmanagers[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.69. .spec.alerting.alertmanagers[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.70. 
.spec.alerting.alertmanagers[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.71. .spec.alerting.alertmanagers[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.72. .spec.alerting.alertmanagers[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.73. .spec.alerting.alertmanagers[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.74. .spec.alerting.alertmanagers[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.75. .spec.apiserverConfig Description APIServerConfig allows specifying a host and auth methods to access the Kubernetes API server. If null, Prometheus is assumed to run inside of the cluster: it will discover the API servers automatically and use the Pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. Type object Required host Property Type Description authorization object Authorization section for the API server. Cannot be set at the same time as basicAuth , bearerToken , or bearerTokenFile . basicAuth object BasicAuth configuration for the API server. Cannot be set at the same time as authorization , bearerToken , or bearerTokenFile . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File to read bearer token for accessing apiserver. Cannot be set at the same time as basicAuth , authorization , or bearerToken . Deprecated: this will be removed in a future release. Prefer using authorization . host string Kubernetes API address consisting of a hostname or IP address followed by an optional port number. tlsConfig object TLS Config to use for the API server. 8.1.76.
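As a sketch of the apiserverConfig fields described in this section, the fragment below points Prometheus at an explicit API server address and supplies a bearer token and CA certificate. The host, Secret, and ConfigMap names are assumptions for illustration only.

spec:
  apiserverConfig:
    host: https://kubernetes.example.com:6443   # illustrative API server address
    authorization:
      type: Bearer
      credentials:
        name: apiserver-token                   # illustrative Secret
        key: token
    tlsConfig:
      ca:
        configMap:
          name: apiserver-ca                    # illustrative ConfigMap
          key: ca.crt
      serverName: kubernetes.example.com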
.spec.apiserverConfig.authorization Description Authorization section for the API server. Cannot be set at the same time as basicAuth , bearerToken , or bearerTokenFile . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.77. .spec.apiserverConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.78. .spec.apiserverConfig.basicAuth Description BasicAuth configuration for the API server. Cannot be set at the same time as authorization , bearerToken , or bearerTokenFile . Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 8.1.79. .spec.apiserverConfig.basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.80. .spec.apiserverConfig.basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.81. .spec.apiserverConfig.tlsConfig Description TLS Config to use for the API server. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.82. .spec.apiserverConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. 
secret object Secret containing data to use for the targets. 8.1.83. .spec.apiserverConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.84. .spec.apiserverConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.85. .spec.apiserverConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.86. .spec.apiserverConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.87. .spec.apiserverConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.88. .spec.apiserverConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.89. .spec.arbitraryFSAccessThroughSMs Description When true, ServiceMonitor, PodMonitor and Probe objects are forbidden to reference arbitrary files on the file system of the 'prometheus' container. When a ServiceMonitor's endpoint specifies a bearerTokenFile value (e.g. '/var/run/secrets/kubernetes.io/serviceaccount/token'), a malicious target can get access to the Prometheus service account's token in the Prometheus scrape request. Setting spec.arbitraryFSAccessThroughSMs.deny to 'true' would prevent the attack. Users should instead provide the credentials using the spec.bearerTokenSecret field. Type object Property Type Description deny boolean
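For example, the hardening described above can be enabled with the single deny flag (a minimal sketch):

spec:
  arbitraryFSAccessThroughSMs:
    deny: true

With this set, scrape configurations must reference credentials through Secret-based fields such as bearerTokenSecret rather than file paths.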
8.1.90. .spec.containers Description Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to the Pods or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The names of containers managed by the operator are: * prometheus * config-reloader * thanos-sidecar Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 8.1.91. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. 
File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 8.1.92. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 8.1.93. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 8.1.94. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 8.1.95. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.96. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
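Combining the containers field (8.1.90) with the env and valueFrom fields described here, the following sketch injects an additional sidecar container. The container name, image, variable names, and Secret are illustrative assumptions; reusing one of the operator-managed names (prometheus, config-reloader, thanos-sidecar) would instead merge into that container via a strategic merge patch, which the documentation above notes is unsupported.

spec:
  containers:
  - name: example-sidecar                        # illustrative injected container
    image: registry.example.com/sidecar:latest   # illustrative image
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace          # downward API field
    - name: EXAMPLE_TOKEN                        # illustrative variable
      valueFrom:
        secretKeyRef:
          name: example-secret                   # illustrative Secret
          key: token
          optional: true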
Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.97. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.98. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.99. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 8.1.100. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 8.1.101. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 8.1.102. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 8.1.103. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. 
The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 8.1.104. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.105. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.106. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.107. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.108. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.109. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.110. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.111. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.112. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.113. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.114. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.115. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.116. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.117. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.118. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.119. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. 
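A sketch of the livenessProbe and httpGet fields described in this section, together with a named container port (covered in the sections that follow), applied to an injected container. The container name, image, path, and port values are illustrative only.

spec:
  containers:
  - name: example-sidecar                        # illustrative injected container
    image: registry.example.com/sidecar:latest   # illustrative image
    ports:
    - name: http
      containerPort: 8080
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /healthz                           # illustrative endpoint
        port: http                               # named port or port number
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3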
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.120. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.121. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.122. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.123. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 8.1.124. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 8.1.125. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. 
initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.126. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.127. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.128. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.129. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.130. 
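Similarly, a readiness probe can be combined with the resources fields covered in the sections that follow. This is a minimal sketch with illustrative values.

spec:
  containers:
  - name: example-sidecar            # illustrative injected container
    readinessProbe:
      tcpSocket:
        port: 8080                   # illustrative port
      periodSeconds: 5
      timeoutSeconds: 2
      failureThreshold: 3
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi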
.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.131. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.132. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 8.1.133. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 8.1.134. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.135. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.136. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.137. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. 
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.138. 
.spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 8.1.139. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.140. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.141. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.142. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.143. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.144. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. 
service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.145. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.146. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.147. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.148. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.149. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 8.1.150. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 8.1.151. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 8.1.152. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
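As an illustration of how the .spec.containers[].volumeMounts fields above are used, the following is a minimal sketch, not a supported configuration: it assumes the operator-generated container is named prometheus (the usual name for the main container managed by the Prometheus Operator) and that a volume named extra-ca is defined elsewhere in the resource, for example under spec.volumes; both names are illustrative only.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example               # hypothetical resource name
  namespace: monitoring       # hypothetical namespace
spec:
  containers:
  - name: prometheus          # assumed name of the operator-generated container
    volumeMounts:
    - name: extra-ca          # must match a volume defined under spec.volumes
      mountPath: /etc/prometheus/extra-ca
      readOnly: true

Because the entry shares its name with an operator-generated container, only the fields listed here are merged into that container via a strategic merge patch; an entry with a non-matching name is added to the Pod as an additional container.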
8.1.153. .spec.excludedFromEnforcement Description List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. It is only applicable if spec.enforcedNamespaceLabel is set to true. Type array 8.1.154. .spec.excludedFromEnforcement[] Description ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. Type object Required namespace resource Property Type Description group string Group of the referent. When not specified, it defaults to monitoring.coreos.com name string Name of the referent. When not set, all resources in the namespace are matched. namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resource string Resource of the referent. 8.1.155. .spec.exemplars Description Exemplar-related settings that are runtime reloadable. The exemplar-storage feature flag must be enabled for these settings to take effect. Type object Property Type Description maxSize integer Maximum number of exemplars stored in memory for all series. exemplar-storage itself must be enabled using the spec.enableFeature option for exemplars to be scraped in the first place. If not set, Prometheus uses its default value. A value of zero or less than zero disables the storage. 8.1.156. .spec.hostAliases Description Optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. Type array 8.1.157. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 8.1.158. .spec.imagePullSecrets Description An optional list of references to Secrets in the same namespace to use for pulling images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 8.1.159. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.160. .spec.initContainers Description InitContainers allows injecting init containers into the Pod definition. These can be used to, for example, fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Init containers described here modify the operator-generated init containers if they share the same name; modifications are applied via a strategic merge patch. The name of the init container managed by the operator is: * init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. An example override is sketched below. Type array
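As a concrete illustration of the strategic merge behaviour described above, the following minimal sketch overrides only the resource requests of the operator-managed init-config-reloader init container. It is an illustrative example rather than a recommended configuration: the resource name, namespace, and resource values are placeholders, and, as stated above, overriding operator-managed init containers is unsupported.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example                    # hypothetical resource name
  namespace: monitoring            # hypothetical namespace
spec:
  initContainers:
  - name: init-config-reloader     # matches the operator-generated init container, so only these fields are merged
    resources:
      requests:
        cpu: 10m                   # placeholder values
        memory: 32Mi

An entry whose name does not match an operator-generated init container is instead added to the Pod as an additional init container.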
8.1.161. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 8.1.162. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 8.1.163. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 8.1.164. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 8.1.165. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.166. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.167. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select
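Read in manifest form, the valueFrom sources documented above combine as in the following fragment of a Prometheus resource. This is a minimal sketch under stated assumptions: the additional init container secret-fetcher, its image, and the ConfigMap prometheus-extra-config with its key log.level are all hypothetical names used only to show how configMapKeyRef, fieldRef, and resourceFieldRef are expressed.

spec:
  initContainers:
  - name: secret-fetcher                   # hypothetical additional init container
    image: registry.example.com/tools:1.0  # hypothetical image
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace    # downward API field of the pod
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: prometheus-extra-config    # hypothetical ConfigMap
          key: log.level
          optional: true                   # do not fail if the ConfigMap or key is missing
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: secret-fetcher
          resource: limits.memory
          divisor: 1Mi                     # expose the value in mebibytes

Each entry uses either value or valueFrom, never both, which matches the constraint noted in the valueFrom description above.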
8.1.168. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.169. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 8.1.170. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 8.1.171. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 8.1.172. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 8.1.173. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy.
Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.175. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.176. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.177. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.178. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.179. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.180. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). 
Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.181. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.182. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.183. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.184. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.185. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.186. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. 
httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.187. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.188. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.189. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.190. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. 
HTTP allows repeated headers. Type array 8.1.191. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.192. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.193. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 8.1.194. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 8.1.195. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.196. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.197. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.198. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.199. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.200. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.201. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. 
Name must be an IANA_SVC_NAME. 8.1.202. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 8.1.203. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 8.1.204. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.205. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.206. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.207. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. 
Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.208. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 8.1.209. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. 
Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.210. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.211. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.212. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. 
failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.213. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.214. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.215. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.216. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.217. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.218. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.219. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 8.1.220. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 8.1.221. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 8.1.222. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.223. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Prometheus pods. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. 
Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 8.1.224. .spec.podMonitorNamespaceSelector Description Namespaces to match for PodMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.225. .spec.podMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.226. .spec.podMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.227. .spec.podMonitorSelector Description Experimental PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.228. .spec.podMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.229. 
.spec.podMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.230. .spec.probeNamespaceSelector Description Experimental Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.231. .spec.probeNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.232. .spec.probeNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.233. .spec.probeSelector Description Experimental Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.234. .spec.probeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.235. .spec.probeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.236. .spec.prometheusRulesExcludedFromEnforce Description Defines the list of PrometheusRule objects to which the namespace label enforcement doesn't apply. This is only relevant when spec.enforcedNamespaceLabel is set to true. Deprecated: use spec.excludedFromEnforcement instead. Type array 8.1.237. .spec.prometheusRulesExcludedFromEnforce[] Description PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. Type object Required ruleName ruleNamespace Property Type Description ruleName string Name of the excluded PrometheusRule object. ruleNamespace string Namespace of the excluded PrometheusRule object. 8.1.238. .spec.query Description QuerySpec defines the configuration of the Prometheus query service. Type object Property Type Description lookbackDelta string The delta difference allowed for retrieving metrics during expression evaluations. maxConcurrency integer Number of concurrent queries that can be run at once. maxSamples integer Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return. timeout string Maximum time a query may take before being aborted. 8.1.239. .spec.remoteRead Description Defines the list of remote read configurations. Type array 8.1.240. .spec.remoteRead[] Description RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as basicAuth , or oauth2 . basicAuth object BasicAuth configuration for the URL. Cannot be set at the same time as authorization , or oauth2 . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File from which to read the bearer token for the URL. Deprecated: this will be removed in a future release. Prefer using authorization . filterExternalLabels boolean Whether to use the external labels as selectors for the remote read endpoint. It requires Prometheus >= v2.34.0. followRedirects boolean Configure whether HTTP requests follow HTTP 3xx redirects.
It requires Prometheus >= v2.26.0. headers object (string) Custom HTTP headers to be sent along with each remote read request. Be aware that headers that are set by Prometheus itself can't be overwritten. Only valid in Prometheus versions 2.26.0 and newer. name string The name of the remote read queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate read configurations. It requires Prometheus >= v2.15.0. oauth2 object OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as authorization , or basicAuth . proxyUrl string Optional ProxyURL. readRecent boolean Whether reads should be made for queries for time ranges that the local storage should have complete data for. remoteTimeout string Timeout for requests to the remote read endpoint. requiredMatchers object (string) An optional list of equality matchers which have to be present in a selector to query the remote read endpoint. tlsConfig object TLS Config to use for the URL. url string The URL of the endpoint to query from. 8.1.241. .spec.remoteRead[].authorization Description Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as basicAuth , or oauth2 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.242. .spec.remoteRead[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.243. .spec.remoteRead[].basicAuth Description BasicAuth configuration for the URL. Cannot be set at the same time as authorization , or oauth2 . Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 8.1.244. .spec.remoteRead[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.245. .spec.remoteRead[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the Secret or its key must be defined 8.1.246. .spec.remoteRead[].oauth2 Description OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as authorization , or basicAuth . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 8.1.247. .spec.remoteRead[].oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.248. .spec.remoteRead[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.249. .spec.remoteRead[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.250. .spec.remoteRead[].oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.251. .spec.remoteRead[].tlsConfig Description TLS Config to use for the URL. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.252. .spec.remoteRead[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.253. .spec.remoteRead[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. 
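As a sketch of how these references fit together, the following remote read entry authenticates with basic auth taken from a Secret and validates the server certificate against a CA stored in a ConfigMap, using only fields documented in this section. The endpoint URL and the Secret, ConfigMap, and key names are placeholders:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  remoteRead:
    - url: https://remote-storage.example.com/api/v1/read    # placeholder endpoint
      readRecent: false
      basicAuth:
        username:
          name: remote-read-credentials                      # placeholder Secret name
          key: username
        password:
          name: remote-read-credentials
          key: password
      tlsConfig:
        serverName: remote-storage.example.com
        ca:
          configMap:
            name: remote-read-ca                             # placeholder ConfigMap name
            key: ca.crt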
Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.254. .spec.remoteRead[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.255. .spec.remoteRead[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.256. .spec.remoteRead[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.257. .spec.remoteRead[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.258. .spec.remoteRead[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.259. .spec.remoteWrite Description Defines the list of remote write configurations. Type array 8.1.260. .spec.remoteWrite[] Description RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as sigv4 , basicAuth , or oauth2 . basicAuth object BasicAuth configuration for the URL. Cannot be set at the same time as sigv4 , authorization , or oauth2 . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File from which to read bearer token for the URL. Deprecated: this will be removed in a future release. Prefer using authorization . headers object (string) Custom HTTP headers to be sent along with each remote write request. 
Be aware that headers that are set by Prometheus itself can't be overwritten. It requires Prometheus >= v2.25.0. metadataConfig object MetadataConfig configures the sending of series metadata to the remote storage. name string The name of the remote write queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate queues. It requires Prometheus >= v2.15.0. oauth2 object OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as sigv4 , authorization , or basicAuth . proxyUrl string Optional ProxyURL. queueConfig object QueueConfig allows tuning of the remote write queue parameters. remoteTimeout string Timeout for requests to the remote write endpoint. sendExemplars boolean Enables sending of exemplars over remote write. Note that exemplar-storage itself must be enabled using the spec.enableFeature option for exemplars to be scraped in the first place. It requires Prometheus >= v2.27.0. sendNativeHistograms boolean Enables sending of native histograms, also known as sparse histograms over remote write. It requires Prometheus >= v2.40.0. sigv4 object Sigv4 configures AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as authorization , basicAuth , or oauth2 . tlsConfig object TLS Config to use for the URL. url string The URL of the endpoint to send samples to. writeRelabelConfigs array The list of remote write relabel configurations. writeRelabelConfigs[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config 8.1.261. .spec.remoteWrite[].authorization Description Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as sigv4 , basicAuth , or oauth2 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.262. .spec.remoteWrite[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.263. .spec.remoteWrite[].basicAuth Description BasicAuth configuration for the URL. Cannot be set at the same time as sigv4 , authorization , or oauth2 . Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 8.1.264. .spec.remoteWrite[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.265. .spec.remoteWrite[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.266. .spec.remoteWrite[].metadataConfig Description MetadataConfig configures the sending of series metadata to the remote storage. Type object Property Type Description send boolean Defines whether metric metadata is sent to the remote storage or not. sendInterval string Defines how frequently metric metadata is sent to the remote storage. 8.1.267. .spec.remoteWrite[].oauth2 Description OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as sigv4 , authorization , or basicAuth . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 8.1.268. .spec.remoteWrite[].oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.269. .spec.remoteWrite[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.270. .spec.remoteWrite[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.271. .spec.remoteWrite[].oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.272. .spec.remoteWrite[].queueConfig Description QueueConfig allows tuning of the remote write queue parameters. 
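As a sketch, the following remote write entry tunes the queue parameters listed below. The endpoint URL is a placeholder and the numeric values are illustrative; they should be sized to the throughput and latency of the actual remote endpoint:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  remoteWrite:
    - url: https://remote-storage.example.com/api/v1/write   # placeholder endpoint
      remoteTimeout: 30s
      queueConfig:
        capacity: 10000           # samples buffered per shard before dropping
        minShards: 1
        maxShards: 30
        maxSamplesPerSend: 2000
        batchSendDeadline: 5s
        minBackoff: 30ms
        maxBackoff: 5s
        retryOnRateLimit: true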
Type object Property Type Description batchSendDeadline string BatchSendDeadline is the maximum time a sample will wait in buffer. capacity integer Capacity is the number of samples to buffer per shard before we start dropping them. maxBackoff string MaxBackoff is the maximum retry delay. maxRetries integer MaxRetries is the maximum number of times to retry a batch on recoverable errors. maxSamplesPerSend integer MaxSamplesPerSend is the maximum number of samples per send. maxShards integer MaxShards is the maximum number of shards, i.e. amount of concurrency. minBackoff string MinBackoff is the initial retry delay. Gets doubled for every retry. minShards integer MinShards is the minimum number of shards, i.e. amount of concurrency. retryOnRateLimit boolean Retry upon receiving a 429 status code from the remote-write storage. This is an experimental feature and might change in the future. 8.1.273. .spec.remoteWrite[].sigv4 Description Sigv4 configures AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as authorization , basicAuth , or oauth2 . Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS role ARN used to authenticate. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. 8.1.274. .spec.remoteWrite[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.275. .spec.remoteWrite[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.276. .spec.remoteWrite[].tlsConfig Description TLS Config to use for the URL. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.277.
.spec.remoteWrite[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.278. .spec.remoteWrite[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.279. .spec.remoteWrite[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.280. .spec.remoteWrite[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.281. .spec.remoteWrite[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.282. .spec.remoteWrite[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.283. .spec.remoteWrite[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.284. .spec.remoteWrite[].writeRelabelConfigs Description The list of remote write relabel configurations. Type array 8.1.285. .spec.remoteWrite[].writeRelabelConfigs[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. 
Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.286. .spec.resources Description Defines the resources requests and limits of the 'prometheus' container. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.287. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.288. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.289. .spec.ruleNamespaceSelector Description Namespaces to match for PrometheusRule discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.290. .spec.ruleNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.291. 
.spec.ruleNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.292. .spec.ruleSelector Description PrometheusRule objects to be selected for rule evaluation. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.293. .spec.ruleSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.294. .spec.ruleSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.295. .spec.rules Description Defines the configuration of the Prometheus rules' engine. Type object Property Type Description alert object Defines the parameters of the Prometheus rules' engine. Any update to these parameters triggers a restart of the pods. 8.1.296. .spec.rules.alert Description Defines the parameters of the Prometheus rules' engine. Any update to these parameters triggers a restart of the pods. Type object Property Type Description forGracePeriod string Minimum duration between alert and restored 'for' state. This is maintained only for alerts with a configured 'for' time greater than the grace period. forOutageTolerance string Max time to tolerate prometheus outage for restoring 'for' state of alert. resendDelay string Minimum amount of time to wait before resending an alert to Alertmanager. 8.1.297. .spec.scrapeConfigNamespaceSelector Description Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.298. .spec.scrapeConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.299. .spec.scrapeConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.300. .spec.scrapeConfigSelector Description Experimental ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.301. .spec.scrapeConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.302. .spec.scrapeConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.303. 
.spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. 
Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.304. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.305. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.306. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 8.1.307. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 8.1.308. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. 
This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.309. .spec.serviceMonitorNamespaceSelector Description Namespaces to match for ServiceMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.310. .spec.serviceMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.311. .spec.serviceMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.312. .spec.serviceMonitorSelector Description ServiceMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the next major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
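For example, the following sketch selects ServiceMonitors by label and restricts discovery to namespaces that carry a given label. The label keys and values are hypothetical; adjust them to the labels used in your cluster.

  # Excerpt from a Prometheus resource spec; label keys and values are hypothetical
  spec:
    serviceMonitorSelector:
      matchLabels:
        team: frontend
    serviceMonitorNamespaceSelector:
      matchExpressions:
      - key: monitoring
        operator: Exists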
8.1.313. .spec.serviceMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.314. .spec.serviceMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.315. .spec.storage Description Storage defines the storage used by Prometheus. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 8.1.316. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 8.1.317. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate.
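For example, a memory-backed emptyDir volume with a size limit might be configured as in the following sketch. Keep in mind that emptyDir storage is ephemeral, so the TSDB data is lost whenever the pod is deleted; the size value is illustrative.

  # Excerpt from a Prometheus resource spec; the size value is illustrative
  spec:
    storage:
      emptyDir:
        medium: Memory
        sizeLimit: 2Gi       # counts against the pod's memory limits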
More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 8.1.318. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 8.1.319. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 8.1.320. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. 
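A generic ephemeral volume might be requested as in the following sketch. The storage class name and size are hypothetical; as described above, the PVC is created and deleted together with the Prometheus pod.

  # Excerpt from a Prometheus resource spec; storage class and size are hypothetical
  spec:
    storage:
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast-ssd
            resources:
              requests:
                storage: 20Gi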
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.321. 
.spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.322. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.323. 
.spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.324. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.325. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.326. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.327. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.328. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
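Where volumes cannot be provisioned dynamically, a label selector can be added to the claim template so that it binds only to manually created PersistentVolumes, for example as in the following sketch. The label key and values are hypothetical.

  # Excerpt from a Prometheus resource spec; labels are hypothetical
  spec:
    storage:
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 20Gi
            selector:
              matchExpressions:
              - key: tier
                operator: In
                values: ["prometheus"]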
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.329. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 8.1.330. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 8.1.331. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. 
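For durable storage, the persistent claim template is the most common choice. The following sketch requests a PVC for each Prometheus replica; the storage class, labels, and size are illustrative and must exist in your cluster.

  # Excerpt from a Prometheus resource spec; storage class, labels, and size are illustrative
  spec:
    storage:
      volumeClaimTemplate:
        metadata:
          labels:
            app.kubernetes.io/part-of: monitoring   # copied onto the generated PVC
        spec:
          storageClassName: gp3-csi
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi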
If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.332. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.333. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.334. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
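To pre-populate the claim from a snapshot, dataSourceRef can point at a VolumeSnapshot, as in the following sketch. This assumes a CSI driver with snapshot support and, for non-core objects, the AnyVolumeDataSource feature gate; the names are hypothetical.

  # Excerpt from a Prometheus resource spec; names are hypothetical
  spec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: csi-rbd
          dataSourceRef:
            apiGroup: snapshot.storage.k8s.io
            kind: VolumeSnapshot
            name: prometheus-data-snap
          resources:
            requests:
              storage: 100Gi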
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.335. .spec.storage.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.336. .spec.storage.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.337. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.338. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.339. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.340. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources integer-or-string allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. 
If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. resizeStatus string resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. 8.1.341. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 8.1.342. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 8.1.343. .spec.thanos Description Defines the configuration of the optional Thanos sidecar. This section is experimental, it may change significantly without deprecation notice in any release. Type object Property Type Description additionalArgs array AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. baseImage string Deprecated: use 'image' instead. blockSize string BlockDuration controls the size of TSDB blocks produced by Prometheus. The default value is 2h to match the upstream Prometheus defaults. WARNING: Changing the block duration can impact the performance and efficiency of the entire Prometheus/Thanos stack due to how it interacts with memory and Thanos compactors. It is recommended to keep this value set to a multiple of 120 times your longest scrape or rule interval. For example, 30s * 120 = 1h. getConfigInterval string How often to retrieve the Prometheus configuration. getConfigTimeout string Maximum time to wait when retrieving the Prometheus configuration. 
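A basic Thanos sidecar configuration using the fields above might look like the following sketch. The version and intervals are illustrative; as noted above, the block duration should remain a multiple of 120 times the longest scrape or rule interval.

  # Excerpt from a Prometheus resource spec; version and intervals are illustrative
  spec:
    thanos:
      version: 0.30.2
      blockSize: 2h
      getConfigInterval: 1m
      getConfigTimeout: 30s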
grpcListenLocal boolean When true, the Thanos sidecar listens on the loopback interface instead of the Pod IP's address for the gRPC endpoints. It has no effect if listenLocal is true. grpcServerTlsConfig object Configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the caFile , certFile , and keyFile fields are supported. httpListenLocal boolean When true, the Thanos sidecar listens on the loopback interface instead of the Pod IP's address for the HTTP endpoints. It has no effect if listenLocal is true. image string Container image name for Thanos. If specified, it takes precedence over the spec.thanos.baseImage , spec.thanos.tag and spec.thanos.sha fields. Specifying spec.thanos.version is still necessary to ensure the Prometheus Operator knows which version of Thanos is being configured. If neither spec.thanos.image nor spec.thanos.baseImage are defined, the operator will use the latest upstream version of Thanos available at the time when the operator was released. listenLocal boolean Deprecated: use grpcListenLocal and httpListenLocal instead. logFormat string Log format for the Thanos sidecar. logLevel string Log level for the Thanos sidecar. minTime string Defines the start of time range limit served by the Thanos sidecar's StoreAPI. The field's value should be a constant time in RFC3339 format or a time duration relative to current time, such as -1d or 2h45m. Valid duration units are ms, s, m, h, d, w, y. objectStorageConfig object Defines the Thanos sidecar's configuration to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ objectStorageConfigFile takes precedence over this field. objectStorageConfigFile string Defines the Thanos sidecar's configuration file to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ This field takes precedence over objectStorageConfig. readyTimeout string ReadyTimeout is the maximum time that the Thanos sidecar will wait for Prometheus to start. resources object Defines the resources requests and limits of the Thanos sidecar. sha string Deprecated: use 'image' instead. The image digest can be specified as part of the image name. tag string Deprecated: use 'image' instead. The image's tag can be specified as part of the image name. tracingConfig object Defines the tracing configuration for the Thanos sidecar. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. tracingConfigFile takes precedence over this field. tracingConfigFile string Defines the tracing configuration file for the Thanos sidecar. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. This field takes precedence over tracingConfig. version string Version of Thanos being deployed. The operator uses this information to generate the Prometheus StatefulSet + configuration files. If not specified, the operator assumes the latest upstream release of Thanos available at the time when the version of the operator was released. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts for Thanos. VolumeMounts specified will be appended to other VolumeMounts in the 'thanos-sidecar' container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 8.1.344. 
.spec.thanos.additionalArgs Description AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. Type array 8.1.345. .spec.thanos.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 8.1.346. .spec.thanos.grpcServerTlsConfig Description Configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the caFile , certFile , and keyFile fields are supported. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.347. .spec.thanos.grpcServerTlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.348. .spec.thanos.grpcServerTlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.349. .spec.thanos.grpcServerTlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.350. .spec.thanos.grpcServerTlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.351. .spec.thanos.grpcServerTlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
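Because only the file-based fields are currently supported for the gRPC server TLS configuration, a sketch would reference certificate files that are already present in the Prometheus container, for example from a Secret mounted by the operator. The paths below are hypothetical.

  # Excerpt from a Prometheus resource spec; file paths are hypothetical
  spec:
    thanos:
      grpcServerTlsConfig:
        caFile: /etc/prometheus/secrets/thanos-grpc-tls/ca.crt
        certFile: /etc/prometheus/secrets/thanos-grpc-tls/tls.crt
        keyFile: /etc/prometheus/secrets/thanos-grpc-tls/tls.key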
optional boolean Specify whether the ConfigMap or its key must be defined 8.1.352. .spec.thanos.grpcServerTlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.353. .spec.thanos.grpcServerTlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.354. .spec.thanos.objectStorageConfig Description Defines the Thanos sidecar's configuration to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ objectStorageConfigFile takes precedence over this field. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.355. .spec.thanos.resources Description Defines the resources requests and limits of the Thanos sidecar. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.356. .spec.thanos.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.357. .spec.thanos.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.358. .spec.thanos.tracingConfig Description Defines the tracing configuration for the Thanos sidecar. 
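For example, the sidecar can be pointed at an object storage configuration stored in a Secret and given explicit resource requests and limits, as in the following sketch. The Secret name, key, and resource values are hypothetical.

  # Excerpt from a Prometheus resource spec; Secret name, key, and resource values are hypothetical
  spec:
    thanos:
      objectStorageConfig:
        name: thanos-objstore
        key: objstore.yml
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          memory: 256Mi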
More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. tracingConfigFile takes precedence over this field. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.359. .spec.thanos.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts for Thanos. VolumeMounts specified will be appended to other VolumeMounts in the 'thanos-sidecar' container. Type array 8.1.360. .spec.thanos.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.361. .spec.tolerations Description Defines the Pods' tolerations if specified. Type array 8.1.362. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 8.1.363. .spec.topologySpreadConstraints Description Defines the pod's topology spread constraints if specified. Type array 8.1.364. 
.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). 
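For example, tolerations and a topology spread constraint might be combined as in the following sketch to run Prometheus on tainted monitoring nodes while spreading replicas across zones. The taint and the pod labels are hypothetical; match the label selector to the labels actually applied to your Prometheus pods.

  # Excerpt from a Prometheus resource spec; taint and labels are hypothetical
  spec:
    tolerations:
    - key: monitoring
      operator: Equal
      value: dedicated
      effect: NoSchedule
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus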
nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 8.1.365. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.366. 
.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.367. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.368. .spec.tracingConfig Description EXPERIMENTAL: TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way. Type object Required endpoint Property Type Description clientType string Client used to export the traces. Supported values are http or grpc . compression string Compression key for supported compression types. The only supported value is gzip . endpoint string Endpoint to send the traces to. Should be provided in format <host>:<port>. headers object (string) Key-value pairs to be used as headers associated with gRPC or HTTP requests. insecure boolean If disabled, the client will use a secure connection. samplingFraction integer-or-string Sets the probability a given trace will be sampled. Must be a float from 0 through 1. timeout string Maximum time the exporter will wait for each batch export. tlsConfig object TLS Config to use when sending traces. 8.1.369. .spec.tracingConfig.tlsConfig Description TLS Config to use when sending traces. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.370. .spec.tracingConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.371. .spec.tracingConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.372. .spec.tracingConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.373. .spec.tracingConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.374. .spec.tracingConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.375. .spec.tracingConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.376. .spec.tracingConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.377. .spec.tsdb Description Defines the runtime reloadable configuration of the timeseries database (TSDB). Type object Property Type Description outOfOrderTimeWindow string Configures how old an out-of-order/out-of-bounds sample can be with respect to the TSDB max time. An out-of-order/out-of-bounds sample is ingested into the TSDB as long as the timestamp of the sample is >= (TSDB.MaxTime - outOfOrderTimeWindow). Out of order ingestion is an experimental feature. It requires Prometheus >= v2.39.0. 8.1.378. .spec.volumeMounts Description VolumeMounts allows the configuration of additional VolumeMounts. VolumeMounts will be appended to other VolumeMounts in the 'prometheus' container, that are generated as a result of StorageSpec objects. Type array 8.1.379. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). 
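As a sketch of how volumeMounts pairs with the volumes list described later in this section, the following hypothetical fragment mounts an emptyDir scratch volume into the 'prometheus' container; the volume name, mount path, and size limit are placeholders. The remaining mount property, subPathExpr, is described immediately after this example.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi          # optional cap on the temporary directory
  volumeMounts:
  - name: scratch             # must match the volume name above
    mountPath: /scratch       # placeholder path inside the 'prometheus' container
    readOnly: false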
subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.380. .spec.volumes Description Volumes allows the configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 8.1.381. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 8.1.382. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 8.1.383. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 8.1.384. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 8.1.385. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 8.1.386. 
.spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.387. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 8.1.388. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.389. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 8.1.390. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.391. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.392. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 8.1.393. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.394. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
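For context, the downwardAPI volume fields can be combined as in the following hypothetical sketch, which exposes the pod's labels as a file in the 'prometheus' container; the volume name and paths are placeholders. The items list used here is described immediately after this example.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  volumes:
  - name: pod-info
    downwardAPI:
      defaultMode: 0644            # octal mode bits; this is also the default when omitted
      items:
      - path: labels               # file created under the mount path
        fieldRef:
          fieldPath: metadata.labels
  volumeMounts:
  - name: pod-info
    mountPath: /etc/pod-info       # placeholder mount path
    readOnly: true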
items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 8.1.395. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 8.1.396. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 8.1.397. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.398. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.399. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 8.1.400. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. 
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 8.1.401. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 8.1.402.
.spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 8.1.403. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.404. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.405. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
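Putting the volumeClaimTemplate fields together, the following hypothetical sketch requests a per-pod ephemeral volume pre-populated from a VolumeSnapshot through dataSourceRef; the storage class, snapshot name, labels, and size are placeholders, and the sketch assumes a snapshot-capable CSI driver is available in the cluster. The remaining dataSourceRef properties (kind, name, namespace) are described immediately after this example.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  volumes:
  - name: scratch-data
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:
            app.kubernetes.io/part-of: example   # copied into the generated PVC
        spec:
          accessModes:
          - ReadWriteOnce
          storageClassName: standard             # placeholder storage class
          resources:
            requests:
              storage: 5Gi                       # placeholder size
          dataSourceRef:
            apiGroup: snapshot.storage.k8s.io
            kind: VolumeSnapshot
            name: example-snapshot               # placeholder snapshot name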
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.406. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.407. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.408. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.409. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.410. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.411. 
.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.412. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 8.1.413. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 8.1.414. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.415. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 8.1.416. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 8.1.417. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 8.1.418. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 8.1.419. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. 
Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 8.1.420. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 8.1.421. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.422. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 8.1.423. 
.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 8.1.424. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 8.1.425. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 8.1.426. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 8.1.427. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 8.1.428. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 8.1.429. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 8.1.430. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.431. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.432. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 8.1.433. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 8.1.434. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 8.1.435. 
.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.436. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.437. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 8.1.438. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.439. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.440. 
.spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 8.1.441. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 8.1.442. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 8.1.443. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.444. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 8.1.445. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.446. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 8.1.447. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.448. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.449. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 8.1.450. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.451. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. 
Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 8.1.452. .spec.web Description Defines the configuration of the Prometheus web server. Type object Property Type Description httpConfig object Defines HTTP parameters for web server. maxConnections integer Defines the maximum number of simultaneous connections A zero value means that Prometheus doesn't accept any incoming connection. pageTitle string The prometheus web page title. tlsConfig object Defines the TLS parameters for HTTPS. 8.1.453. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 8.1.454. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 8.1.455. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. 
Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 8.1.456. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.457. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.458. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.459. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.460. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.461. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.462. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.463. .status Description Most recent observed status of the Prometheus cluster. Read-only. 
More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Prometheus deployment. conditions array The current state of the Prometheus deployment. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Prometheus deployment (their labels match the selector). shardStatuses array The list has one entry per shard. Each entry provides a summary of the shard status. shardStatuses[] object unavailableReplicas integer Total number of unavailable pods targeted by this Prometheus deployment. updatedReplicas integer Total number of non-terminated pods targeted by this Prometheus deployment that have the desired version spec. 8.1.464. .status.conditions Description The current state of the Prometheus deployment. Type array 8.1.465. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 8.1.466. .status.shardStatuses Description The list has one entry per shard. Each entry provides a summary of the shard status. Type array 8.1.467. .status.shardStatuses[] Description Type object Required availableReplicas replicas shardID unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this shard. replicas integer Total number of pods targeted by this shard. shardID string Identifier of the shard. unavailableReplicas integer Total number of unavailable pods targeted by this shard. updatedReplicas integer Total number of non-terminated pods targeted by this shard that have the desired spec. 8.2. 
API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheuses GET : list objects of kind Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses DELETE : delete collection of Prometheus GET : list objects of kind Prometheus POST : create Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} DELETE : delete Prometheus GET : read the specified Prometheus PATCH : partially update the specified Prometheus PUT : replace the specified Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status GET : read status of the specified Prometheus PATCH : partially update status of the specified Prometheus PUT : replace status of the specified Prometheus 8.2.1. /apis/monitoring.coreos.com/v1/prometheuses Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Prometheus Table 8.2. HTTP responses HTTP code Reponse body 200 - OK PrometheusList schema 401 - Unauthorized Empty 8.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses Table 8.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Prometheus Table 8.5. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Prometheus Table 8.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.8. HTTP responses HTTP code Reponse body 200 - OK PrometheusList schema 401 - Unauthorized Empty HTTP method POST Description create Prometheus Table 8.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.10. Body parameters Parameter Type Description body Prometheus schema Table 8.11. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 202 - Accepted Prometheus schema 401 - Unauthorized Empty 8.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} Table 8.12. Global path parameters Parameter Type Description name string name of the Prometheus namespace string object name and auth scope, such as for teams and projects Table 8.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete Prometheus Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.15. Body parameters Parameter Type Description body DeleteOptions schema Table 8.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Prometheus Table 8.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.18. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Prometheus Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.20. Body parameters Parameter Type Description body Patch schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Prometheus Table 8.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.23. Body parameters Parameter Type Description body Prometheus schema Table 8.24. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty 8.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status Table 8.25. Global path parameters Parameter Type Description name string name of the Prometheus namespace string object name and auth scope, such as for teams and projects Table 8.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Prometheus Table 8.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.28. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Prometheus Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.30. Body parameters Parameter Type Description body Patch schema Table 8.31. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Prometheus Table 8.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.33. 
Body parameters Parameter Type Description body Prometheus schema Table 8.34. HTTP responses HTTP code Response body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty
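As a quick illustration of how these endpoints are typically exercised, the following is a minimal sketch using the oc client; the openshift-monitoring namespace and the object name k8s are assumptions chosen for the example, not values mandated by the API. Listing objects of kind Prometheus across all namespaces corresponds to the first endpoint above:
$ oc get --raw "/apis/monitoring.coreos.com/v1/prometheuses"
Reading a single Prometheus object matches the namespaced GET endpoint:
$ oc get --raw "/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/prometheuses/k8s"
The same read is available through the typed client:
$ oc -n openshift-monitoring get prometheus k8s -o yaml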
Chapter 5. Control plane backup and restore 5.1. Backing up etcd etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. Back up your cluster's etcd data regularly and store the backup in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost. Be sure to take an etcd backup before you update your cluster. Taking a backup before you update is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.17.5 cluster must use an etcd backup that was taken from 4.17.5. Important Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host. After you have an etcd backup, you can restore to a previous cluster state. 5.1.1. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set. Procedure Start a debug session as root for a control plane node: $ oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables by running the following commands: $ export HTTP_PROXY=http://<your_proxy.example.com>:8080 $ export HTTPS_PROXY=https://<your_proxy.example.com>:8080 $ export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
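For reference, the call that the script wraps is roughly of the following form; the endpoint and certificate paths are placeholders rather than the exact values the script resolves, and you do not need to invoke etcdctl directly because the command for this step follows.
sh-4.4# ETCDCTL_API=3 etcdctl --endpoints=https://<etcd_node_ip>:2379 --cacert=<ca_certificate> --cert=<etcd_client_certificate> --key=<etcd_client_key> snapshot save /home/core/assets/backup/snapshot_<datetimestamp>.db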
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 5.1.2. Additional resources Recovering an unhealthy etcd cluster 5.1.3. Creating automated etcd backups The automated backup feature for etcd supports both recurring and single backups. Recurring backups create a cron job that starts a single backup each time the job triggers. Important Automating etcd backups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Follow these steps to enable automated backups for etcd. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster prevents minor version updates. The TechPreviewNoUpgrade feature set cannot be disabled. 
Do not enable this feature set on production clusters. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure Create a FeatureGate custom resource (CR) file named enable-tech-preview-no-upgrade.yaml with the following contents: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade Apply the CR and enable automated backups: USD oc apply -f enable-tech-preview-no-upgrade.yaml It takes time to enable the related APIs. Verify the creation of the custom resource definition (CRD) by running the following command: USD oc get crd | grep backup Example output backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z 5.1.3.1. Creating a single etcd backup Follow these steps to create a single etcd backup by creating and applying a custom resource (CR). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the PVC to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup: Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the node to attach this PV to. 
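If you have not already applied the PV manifest, do so before verifying it. A minimal sketch, assuming the file name used above:
USD oc apply -f etcd-backup-pv-fs.yaml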
Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml 5.1.3.2. Creating recurring etcd backups Follow these steps to create automated recurring backups of etcd. Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage 1 The amount of storage available to the PVC. Adjust this value for your requirements. Note Each of the following providers require changes to the accessModes and storageClassName keys: Provider accessModes value storageClassName value AWS with the versioned-installer-efc_operator-ci profile - ReadWriteMany efs-sc Google Cloud Platform - ReadWriteMany filestore-csi Microsoft Azure - ReadWriteMany azurefile-csi Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps: Warning If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data. 
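Because a local PV ties the backup data to a single host, you might also schedule a copy of completed backups to a machine outside the cluster. The following is only a sketch; it assumes SSH access to the control plane node as the core user, that backups land under the /mnt path used by the local PV defined below, and that /srv/etcd-backups/ exists on the destination, so adjust all of these for your environment.
USD scp -r core@<control_plane_node>:/mnt/ /srv/etcd-backups/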
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml from the applied StorageClass with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the master node to attach this PV to. Tip Run the following command to list the available nodes: USD oc get nodes Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a custom resource definition (CRD) file named etcd-recurring-backups.yaml . The contents of the created CRD define the schedule and retention type of automated backups. For the default retention type of RetentionNumber with 15 retained backups, use contents such as the following example: apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: "20 4 * * *" 1 timeZone: "UTC" pvcName: etcd-backup-pvc 1 The CronTab schedule for recurring backups. Adjust this value for your needs. To use retention based on the maximum number of backups, add the following key-value pairs to the etcd key: spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2 1 The retention type. Defaults to RetentionNumber if unspecified. 2 The maximum number of backups to retain. Adjust this value for your needs. Defaults to 15 backups if unspecified. Warning A known issue causes the number of retained backups to be one greater than the configured value. For retention based on the file size of backups, use the following: spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1 1 The maximum file size of the retained backups in gigabytes. Adjust this value for your needs. Defaults to 10 GB if unspecified. Warning A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value. Create the cron job defined by the CRD by running the following command: USD oc create -f etcd-recurring-backup.yaml To find the created cron job, run the following command: USD oc get cronjob -n openshift-etcd 5.2. 
Replacing an unhealthy etcd member This document describes the process to replace a single unhealthy etcd member. This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping. Note If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a cluster state instead of this procedure. If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure. If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member. 5.2.1. Prerequisites Take an etcd backup prior to replacing an unhealthy etcd member. 5.2.2. Identifying an unhealthy etcd member You can identify if your cluster has an unhealthy etcd member. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Check the status of the EtcdMembersAvailable status condition using the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}' Review the output: 2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy. 5.2.3. Determining the state of the unhealthy etcd member The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in: The machine is not running or the node is not ready The etcd pod is crashlooping This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member. Note If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have identified an unhealthy etcd member. Procedure Determine if the machine is not running : USD oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running Example output ip-10-0-131-183.ec2.internal stopped 1 1 This output lists the node and the status of the node's machine. If the status is anything other than running , then the machine is not running . If the machine is not running , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the node is not ready . If either of the following scenarios are true, then the node is not ready . If the machine is running, then check whether the node is unreachable: USD oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable Example output ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1 1 If the node is listed with an unreachable taint, then the node is not ready . 
If the node is still reachable, then check whether the node is listed as NotReady : USD oc get nodes -l node-role.kubernetes.io/master | grep "NotReady" Example output ip-10-0-131-183.ec2.internal NotReady master 122m v1.28.5 1 1 If the node is listed as NotReady , then the node is not ready . If the node is not ready , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the etcd pod is crashlooping . If the machine is running and the node is ready, then check whether the etcd pod is crashlooping. Verify that all control plane nodes are listed as Ready : USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.28.5 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.28.5 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.28.5 Check whether the status of an etcd pod is either Error or CrashloopBackoff : USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m 1 Since this status of this pod is Error , then the etcd pod is crashlooping . If the etcd pod is crashlooping , then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure. 5.2.4. Replacing the unhealthy etcd member Depending on the state of your unhealthy etcd member, use one of the following procedures: Replacing an unhealthy etcd member whose machine is not running or whose node is not ready Replacing an unhealthy etcd member whose etcd pod is crashlooping Replacing an unhealthy stopped baremetal etcd member 5.2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready. Note If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. Prerequisites You have identified the unhealthy etcd member. You have verified that either the machine is not running or the node is not ready. Important You must wait if the other control plane nodes are powered off. The control plane nodes must remain powered off until the replacement of an unhealthy etcd member is complete. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. The USD etcdctl endpoint health command will list the removed member until the procedure of replacement is finished and a new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 6fc1e7c9db35841d Example output Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Important After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change. Note etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure. 
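Before moving on, you can confirm that the override from the patch above is in place. A sketch using a JSONPath query; the useUnsupportedUnsafeNonHANonProductionUnstableEtcd setting should appear in the output:
USD oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}{"\n"}'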
Delete the affected node by running the following command: USD oc delete node <node_name> Example command USD oc delete node ip-10-0-131-183.ec2.internal Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master by using the same method that was used to originally create it. Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal . Delete the machine of the unhealthy member: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the unhealthy node. A new machine is automatically provisioned after deleting the machine of the unhealthy member. 
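Provisioning the replacement machine can take several minutes. Rather than re-running the list command repeatedly, you can watch for the new machine to appear (a sketch; press Ctrl+C to stop watching once the new machine reaches the Running phase):
USD oc get machines -n openshift-machine-api -w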
Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready once the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Note Verify the subnet IDs that you are using for your machine sets to ensure that they end up in the correct availability zone. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. Verify that there are exactly three etcd members. 
Connect to the running etcd container, passing in the name of a pod that was not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. Additional resources Recovering a degraded etcd Operator 5.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping. Prerequisites You have identified the unhealthy etcd member. You have verified that the etcd pod is crashlooping. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Stop the crashlooping etcd pod. Debug the node that is crashlooping. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc debug node/ip-10-0-131-183.ec2.internal 1 1 Replace this with the name of the unhealthy node. Change your root directory to /host : sh-4.2# chroot /host Move the existing etcd pod file out of the kubelet manifest directory: sh-4.2# mkdir /var/lib/etcd-backup sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/ Move the etcd data directory to a different location: sh-4.2# mv /var/lib/etcd/ /tmp You can now exit the node shell. Remove the unhealthy member. Choose a pod that is not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m Connect to the running etcd container, passing in the name of a pod that is not on the affected node. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 62bcf33650a7170a Example output Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Force etcd redeployment. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that the new member is available and healthy. Connect to the running etcd container again. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal Verify that all members are healthy: sh-4.2# etcdctl endpoint health Example output https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms 5.2.4.3. Replacing an unhealthy bare metal etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready. If you are running installer-provisioned infrastructure or you used the Machine API to create your machines, follow these steps. Otherwise you must create the new control plane node using the same method that was used to originally create it. Prerequisites You have identified the unhealthy bare metal etcd member. You have verified that either the machine is not running or the node is not ready. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Verify and remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none> Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is completed and the new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. sh-4.2# etcdctl member remove 7a8197040a5126c8 Example output Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ You can now exit the node shell. Important After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot. 
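Once the API server is reachable again, you can re-check the reported membership with the same condition query used earlier in this chapter. A sketch; the message should no longer list the removed member:
USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'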
Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed by running the following commands. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep openshift-control-plane-2 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret "etcd-peer-openshift-control-plane-2" deleted Delete the serving secret: USD oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-metrics-openshift-control-plane-2" deleted Delete the metrics secret: USD oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-openshift-control-plane-2" deleted Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 This is the control plane machine for the unhealthy node, examplecluster-control-plane-2 . Ensure that the Bare Metal Operator is available by running the following command: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.15.0 True False False 3d15h Remove the old BareMetalHost object by running the following command: USD oc delete bmh openshift-control-plane-2 -n openshift-machine-api Example output baremetalhost.metal3.io "openshift-control-plane-2" deleted Delete the machine of the unhealthy member by running the following command: USD oc delete machine -n openshift-machine-api examplecluster-control-plane-2 After you remove the BareMetalHost and Machine objects, then the Machine controller automatically deletes the Node object. 
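A quick way to confirm that both objects are gone is a combined query. A sketch; the grep pattern assumes the naming used in this example, and an empty result means the deletion completed:
USD oc get bmh,machines -n openshift-machine-api | grep control-plane-2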
If deletion of the machine is delayed for any reason or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field. Important Do not interrupt machine deletion by pressing Ctrl+c . You must allow the command to proceed to completion. Open a new terminal window to edit and delete the finalizer fields. A new machine is automatically provisioned after deleting the machine of the unhealthy member. Edit the machine configuration by running the following command: USD oc edit machine -n openshift-machine-api examplecluster-control-plane-2 Delete the following fields in the Machine custom resource, and then save the updated file: finalizers: - machine.machine.openshift.io Example output machine.machine.openshift.io/examplecluster-control-plane-2 edited Verify that the machine was deleted by running the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned Verify that the node has been deleted by running the following command: USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.28.5 openshift-control-plane-1 Ready master 3h24m v1.28.5 openshift-compute-0 Ready worker 176m v1.28.5 openshift-compute-1 Ready worker 176m v1.28.5 Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF Note The username and password can be found from the other bare metal host's secrets. The protocol to use in bmc:address can be taken from other bmh objects. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. 
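Note that because the secret above uses the data field, the <username> and <password> values must be base64-encoded strings rather than plain text (alternatively, a stringData field accepts plain text). A sketch for producing the encoded values, assuming example credentials:
USD echo -n '<username>' | base64
USD echo -n '<password>' | base64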
Verify the creation process using available BareMetalHost objects: USD oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It should take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Verify that the bare metal host becomes provisioned and no error reported by running the following command: USD oc get bmh -n openshift-machine-api Example output USD oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that the new node is added and in a ready state by running this command: USD oc get nodes Example output USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.28.5 openshift-control-plane-1 Ready master 4h26m v1.28.5 openshift-control-plane-2 Ready master 12m v1.28.5 openshift-compute-0 Ready worker 3h58m v1.28.5 openshift-compute-1 Ready worker 3h58m v1.28.5 Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. 
Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. To verify there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ Note If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Verify that all etcd members are healthy by running the following command: # etcdctl endpoint health --cluster Example output https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms Validate that all nodes are at the latest revision by running the following command: USD oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' 5.2.5. Additional resources Quorum protection with machine lifecycle hooks 5.3. Disaster recovery 5.3.1. 
About disaster recovery The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state. Important Disaster recovery requires you to have at least one healthy control plane host. Restoring to a cluster state This solution handles situations where you want to restore your cluster to a state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a state. If applicable, you might also need to recover from expired control plane certificates . Warning Restoring to a cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort. Prior to performing a restore, see About restoring cluster state for more information on the impact to the cluster. Note If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member . Recovering from expired control plane certificates This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates. 5.3.2. Restoring to a cluster state To restore the cluster to a state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. 5.3.2.1. About restoring cluster state You can use an etcd backup to restore your cluster to a state. This can be used to recover from the following situations: The cluster has lost the majority of control plane hosts (quorum loss). An administrator has deleted something critical and must restore to recover the cluster. Warning Restoring to a cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort. If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers. It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues. In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates. 5.3.2.2. Restoring to a cluster state You can use a saved etcd backup to restore a cluster state or restore a cluster that has lost the majority of control plane hosts.
Note If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note You do not need to stop the static pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory by running: USD sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped by using: USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" If the output of this command is not empty, wait a few minutes and check again. Move the existing kube-apiserver file out of the kubelet manifest directory by running: USD sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the kube-apiserver containers are stopped by running: USD sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. Move the existing kube-controller-manager file out of the kubelet manifest directory by using: USD sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp Verify that the kube-controller-manager containers are stopped by running: USD sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again.
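The kube-scheduler manifest is moved in the same way in the next step. Purely as an optional convenience, the following is a minimal sketch that applies the same move-and-verify pattern to all four static pod manifests on a non-recovery control plane host; it assumes the manifest file names shown in this procedure and that crictl is available on the host.

# Sketch: stop all static pods on a non-recovery control plane host.
for manifest in etcd-pod kube-apiserver-pod kube-controller-manager-pod kube-scheduler-pod; do
  sudo mv -v "/etc/kubernetes/manifests/${manifest}.yaml" /tmp
done

# Wait until the matching containers are gone (operator and guard pods are ignored).
for name in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  while sudo crictl ps | grep "$name" | egrep -v "operator|guard" | grep -q .; do
    echo "waiting for $name containers to stop ..."
    sleep 10
  done
done

If you already moved some of the manifests by following the individual steps, the corresponding mv commands simply fail and can be ignored.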
Move the existing kube-scheduler file out of the kubelet manifest directory by using: USD sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp Verify that the kube-scheduler containers are stopped by using: USD sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. Move the etcd data directory to a different location with the following example: USD sudo mv -v /var/lib/etcd/ /tmp If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps: Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp Verify that any containers managed by the keepalived daemon are stopped: USD sudo crictl ps --name keepalived The output of this command should be empty. If it is not empty, wait a few minutes and check again. Check if the control plane has any Virtual IPs (VIPs) assigned to it: USD ip -o address | egrep '<api_vip>|<ingress_vip>' For each reported VIP, run the following command to remove it: USD sudo ip address del <reported_vip> dev <reported_vip_device> Repeat this step on each of the other control plane hosts that is not the recovery host. Access the recovery control plane host. If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP: USD ip -o address | grep <api_vip> The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml The cluster-restore.sh script must show that etcd , kube-apiserver , kube-controller-manager , and kube-scheduler pods are stopped and then started at the end of the restore process. Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state. 
Run the following command: USD oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.28.5 host-172-25-75-38 Ready infra,worker 3d20h v1.28.5 host-172-25-75-40 Ready master 3d20h v1.28.5 host-172-25-75-65 Ready master 3d20h v1.28.5 host-172-25-75-74 Ready infra,worker 3d20h v1.28.5 host-172-25-75-79 Ready worker 3d20h v1.28.5 host-172-25-75-86 Ready worker 3d20h v1.28.5 host-172-25-75-98 Ready infra,worker 3d20h v1.28.5 It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. USD ssh -i <ssh-key-path> core@<master-hostname> Sample pki directory sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem Restart the kubelet service on all control plane hosts. From the recovery host, run: USD sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending Certificate Signing Requests (CSRs): Note Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step. Get the list of current CSRs by running: USD oc get csr Example output 1 2 A pending kubelet serving CSR, requested by the node for the kubelet serving endpoint. 3 4 A pending kubelet client CSR, requested with the node-bootstrapper node bootstrap credentials. Review the details of a CSR to verify that it is valid by running: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR by running: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet service CSR by running: USD oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running by using: USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running by using: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending , or the output lists more than one running etcd pod, wait a few minutes and check again. If you are using the OVNKubernetes network plugin, you must restart ovnkube-controlplane pods. Delete all of the ovnkube-controlplane pods by running: USD oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane Verify that all of the ovnkube-controlplane pods were redeployed by using: USD oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one. 
Use the following steps to restart OVN-Kubernetes pods on each node: Important Restart OVN-Kubernetes pods in the following order: The recovery control plane host The other control plane hosts (if available) The other nodes Note Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail , then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again. Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail . Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run: USD sudo rm -f /var/lib/ovn-ic/etc/*.db Restart the Open vSwitch services. Access the node by using Secure Shell (SSH) and run the following command: USD sudo systemctl restart ovs-vswitchd ovsdb-server Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting: USD oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node> Check the status of the OVN pods by running the following command: USD oc get po -n openshift-ovn-kubernetes If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting: USD oc delete node <node> Use SSH to log in to the OVN pod node with the Terminating status by running the following command: USD ssh -i <ssh-key-path> core@<node> Move all PEM files from the /var/lib/kubelet/pki directory by running the following command: USD sudo mv /var/lib/kubelet/pki/* /tmp Restart the kubelet service by running the following command: USD sudo systemctl restart kubelet.service Return to the recovery etcd machines by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR: USD oc adm certificate approve csr-<uuid> Verify that the node is back by running the following command: USD oc get nodes Verify that the ovnkube-node pod is running again with: USD oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node> Note It might take several minutes for the pods to restart. Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host.
For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the lost control plane hosts. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal . Delete the machine of the lost control plane host by running: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host. A new machine is automatically provisioned after deleting the machine of the lost control plane host. Verify that a new machine has been created by running: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. 
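The delete-and-wait cycle above can also be scripted. The following is a minimal sketch, assuming the standard machine.openshift.io/cluster-api-machine-role=master label on control plane machines; <machine_name> is a placeholder for the machine of the lost control plane host and must be checked against the oc get machines output before deleting anything.

# Sketch: delete the machine of a lost control plane host and wait for its replacement.
oc delete machine -n openshift-machine-api <machine_name>

# Poll until every control plane machine reports the Running phase.
until [ -z "$(oc get machines -n openshift-machine-api \
    -l machine.openshift.io/cluster-api-machine-role=master \
    -o jsonpath='{range .items[*]}{.status.phase}{"\n"}{end}' | grep -v Running)" ]; do
  echo "waiting for the replacement control plane machine to reach Running ..."
  sleep 30
done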
Turn off the quorum guard by entering: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. In a separate terminal window within the recovery host, export the recovery kubeconfig file by running: USD export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig Force etcd redeployment. In the same terminal window where you exported the recovery kubeconfig file, run: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Turn the quorum guard back on by entering: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by running: USD oc get etcd/cluster -oyaml Verify all nodes are updated to the latest revision. In a terminal that has access to the cluster as a cluster-admin user, run: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run: Force a new rollout for kube-apiserver : USD oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. 
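The same forceRedeploymentReason patch and revision check are repeated for the Kubernetes controller manager and the kube-scheduler in the steps that follow. As a convenience, the revision check can be polled with a minimal sketch like the one below, which reuses the jsonpath expression shown in this procedure; the wait-revision.sh file name is only an example.

#!/bin/bash
# Sketch: wait until a control plane operator reports AllNodesAtLatestRevision.
# Usage example: ./wait-revision.sh kubeapiserver
# (also works with etcd, kubecontrollermanager, and kubescheduler).
resource="${1:-kubeapiserver}"
jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}{end}'

until oc get "$resource" -o=jsonpath="$jsonpath" | grep -q AllNodesAtLatestRevision; do
  echo "waiting for $resource nodes to reach the latest revision ..."
  sleep 30
done
oc get "$resource" -o=jsonpath="$jsonpath"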
Force a new rollout for the Kubernetes controller manager by running the following command: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision by running: USD oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the kube-scheduler by running: USD oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision by using: USD oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes. Note On completion of the procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted. Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates rather than OAuth tokens. You can authenticate with this file by issuing the following command: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig Issue the following command to display your authenticated user name: USD oc whoami 5.3.2.3. Additional resources Installing a user-provisioned cluster on bare metal Creating a bastion host to access OpenShift Container Platform instances and the control plane nodes with SSH Replacing a bare-metal control plane node 5.3.2.4. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, a state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object.
When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa. The following are some example scenarios that produce an out-of-date status: MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from respective nodes. Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources). 5.3.3. Recovering from expired control plane certificates 5.3.3.1. Recovering from expired control plane certificates The cluster can automatically recover from expired control plane certificates. However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs. Use the following steps to approve the pending CSRs: Procedure Get the list of current CSRs: USD oc get csr Example output 1 A pending kubelet service CSR (for user-provisioned installations). 2 A pending node-bootstrapper CSR. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet serving CSR: USD oc adm certificate approve <csr_name>
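As a convenience for the CSR review loop above, the following is a minimal sketch that lists pending CSRs and prompts before approving each one; it relies only on the oc get csr table output shown in this procedure and is not a substitute for verifying that each request is valid.

# Sketch: review and approve pending CSRs one at a time.
for csr in $(oc get csr --no-headers | awk '$NF == "Pending" {print $1}'); do
  oc describe csr "$csr"                # verify the requestor and signer first
  read -r -p "Approve $csr? [y/N] " answer
  [ "$answer" = "y" ] && oc adm certificate approve "$csr"
done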
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade",
"oc apply -f enable-tech-preview-no-upgrade.yaml",
"oc get crd | grep backup",
"backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get nodes",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: \"20 4 * * *\" 1 timeZone: \"UTC\" pvcName: etcd-backup-pvc",
"spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2",
"spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1",
"oc create -f etcd-recurring-backup.yaml",
"oc get cronjob -n openshift-etcd",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'",
"2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy",
"oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running",
"ip-10-0-131-183.ec2.internal stopped 1",
"oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable",
"ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1",
"oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"",
"ip-10-0-131-183.ec2.internal NotReady master 122m v1.28.5 1",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.28.5 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.28.5 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.28.5",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 6fc1e7c9db35841d",
"Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc delete node <node_name>",
"oc delete node ip-10-0-131-183.ec2.internal",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc debug node/ip-10-0-131-183.ec2.internal 1",
"sh-4.2# chroot /host",
"sh-4.2# mkdir /var/lib/etcd-backup",
"sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/",
"sh-4.2# mv /var/lib/etcd/ /tmp",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 62bcf33650a7170a",
"Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl endpoint health",
"https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+",
"sh-4.2# etcdctl member remove 7a8197040a5126c8",
"Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep openshift-control-plane-2",
"etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m",
"oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.15.0 True False False 3d15h",
"oc delete bmh openshift-control-plane-2 -n openshift-machine-api",
"baremetalhost.metal3.io \"openshift-control-plane-2\" deleted",
"oc delete machine -n openshift-machine-api examplecluster-control-plane-2",
"oc edit machine -n openshift-machine-api examplecluster-control-plane-2",
"finalizers: - machine.machine.openshift.io",
"machine.machine.openshift.io/examplecluster-control-plane-2 edited",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.28.5 openshift-control-plane-1 Ready master 3h24m v1.28.5 openshift-compute-0 Ready worker 176m v1.28.5 openshift-compute-1 Ready worker 176m v1.28.5",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get bmh -n openshift-machine-api",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get nodes",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.28.5 openshift-control-plane-1 Ready master 4h26m v1.28.5 openshift-control-plane-2 Ready master 12m v1.28.5 openshift-compute-0 Ready worker 3h58m v1.28.5 openshift-compute-1 Ready worker 3h58m v1.28.5",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+",
"etcdctl endpoint health --cluster",
"https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms",
"oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision",
"sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp",
"sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp",
"sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"",
"sudo mv -v /var/lib/etcd/ /tmp",
"sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp",
"sudo crictl ps --name keepalived",
"ip -o address | egrep '<api_vip>|<ingress_vip>'",
"sudo ip address del <reported_vip> dev <reported_vip_device>",
"ip -o address | grep <api_vip>",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.28.5 host-172-25-75-38 Ready infra,worker 3d20h v1.28.5 host-172-25-75-40 Ready master 3d20h v1.28.5 host-172-25-75-65 Ready master 3d20h v1.28.5 host-172-25-75-74 Ready infra,worker 3d20h v1.28.5 host-172-25-75-79 Ready worker 3d20h v1.28.5 host-172-25-75-86 Ready worker 3d20h v1.28.5 host-172-25-75-98 Ready infra,worker 3d20h v1.28.5",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane",
"sudo rm -f /var/lib/ovn-ic/etc/*.db",
"sudo systemctl restart ovs-vswitchd ovsdb-server",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get po -n openshift-ovn-kubernetes",
"oc delete node <node>",
"ssh -i <ssh-key-path> core@<node>",
"sudo mv /var/lib/kubelet/pki/* /tmp",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending",
"adm certificate approve csr-<uuid>",
"oc get nodes",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig",
"oc whoami",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/backup_and_restore/control-plane-backup-and-restore |
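The commands above approve pending certificate signing requests one at a time. When several kubelet client and serving CSRs are outstanding after a restore, they can be approved in a single pass; the following sketch is illustrative only and assumes a cluster-admin session with a working KUBECONFIG:

# Approve every CSR that has no status yet (that is, still pending); repeat until oc get csr shows no Pending entries.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

New kubelet-serving CSRs can appear shortly after the client CSRs are approved, so the command may need to be run more than once.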
25.12. Scanning Storage Interconnects | 25.12. Scanning Storage Interconnects Certain commands allow you to reset, scan, or both reset and scan one or more interconnects, which potentially adds and removes multiple devices in one operation. This type of scan can be disruptive, as it can cause delays while I/O operations time out, and remove devices unexpectedly. Red Hat recommends using interconnect scanning only when necessary . Observe the following restrictions when scanning storage interconnects: All I/O on the affected interconnects must be paused and flushed before executing the procedure, and the results of the scan checked before I/O is resumed. As with removing a device, interconnect scanning is not recommended when the system is under memory pressure. To determine the level of memory pressure, run the vmstat 1 100 command. Interconnect scanning is not recommended if free memory is less than 5% of the total memory in more than 10 samples per 100. Also, interconnect scanning is not recommended if swapping is active (non-zero si and so columns in the vmstat output). The free command can also display the total memory. The following commands can be used to scan storage interconnects: echo "1" > /sys/class/fc_host/host N /issue_lip (Replace N with the host number.) This operation performs a Loop Initialization Protocol ( LIP ), scans the interconnect, and causes the SCSI layer to be updated to reflect the devices currently on the bus. Essentially, an LIP is a bus reset, and causes device addition and removal. This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect. Note that issue_lip is an asynchronous operation. The command can complete before the entire scan has completed. You must monitor /var/log/messages to determine when issue_lip finishes. The lpfc , qla2xxx , and bnx2fc drivers support issue_lip . For more information about the API capabilities supported by each driver in Red Hat Enterprise Linux, see Table 25.1, "Fibre Channel API Capabilities" . /usr/bin/rescan-scsi-bus.sh The /usr/bin/rescan-scsi-bus.sh script was introduced in Red Hat Enterprise Linux 5.4. By default, this script scans all the SCSI buses on the system, and updates the SCSI layer to reflect new devices on the bus. The script provides additional options to allow device removal, and the issuing of LIPs. For more information about this script, including known issues, see Section 25.18, "Adding/Removing a Logical Unit Through rescan-scsi-bus.sh" . echo "- - -" > /sys/class/scsi_host/host h /scan This is the same command as described in Section 25.11, "Adding a Storage Device or Path" to add a storage device or path. In this case, however, the channel number, SCSI target ID, and LUN values are replaced by wildcards. Any combination of identifiers and wildcards is allowed, so you can make the command as specific or broad as needed. This procedure adds LUNs, but does not remove them. modprobe --remove driver-name , modprobe driver-name Running the modprobe --remove driver-name command followed by the modprobe driver-name command completely re-initializes the state of all interconnects controlled by the driver. Despite being rather extreme, using the described commands can be appropriate in certain situations. The commands can be used, for example, to restart the driver with a different module parameter value. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/scanning-storage-interconnects |
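A minimal loop that applies the LIP and wildcard-scan operations described above to every host adapter on the system might look like the following. This is an illustrative sketch only: it must be run as root, and it assumes that all I/O on the affected interconnects has already been paused and flushed as the restrictions above require.

# Issue a LIP on every Fibre Channel host adapter; issue_lip is asynchronous, so watch /var/log/messages for completion.
for fc_host in /sys/class/fc_host/host*; do
    echo "1" > "${fc_host}/issue_lip"
done
# Rescan every SCSI host with wildcards for channel, SCSI target ID, and LUN; this adds devices but does not remove them.
for scsi_host in /sys/class/scsi_host/host*; do
    echo "- - -" > "${scsi_host}/scan"
done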
Chapter 2. OAuthAccessToken [oauth.openshift.io/v1] | Chapter 2. OAuthAccessToken [oauth.openshift.io/v1] Description OAuthAccessToken describes an OAuth access token. The name of a token must be prefixed with a sha256~ string, must not contain "/" or "%" characters and must be at least 32 characters long. The name of the token is constructed from the actual token by sha256-hashing it and using URL-safe unpadded base64-encoding (as described in RFC4648) on the hashed result. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources authorizeToken string AuthorizeToken contains the token that authorized this token clientName string ClientName references the client that created this token. expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. inactivityTimeoutSeconds integer InactivityTimeoutSeconds is the value in seconds, from the CreationTimestamp, after which this token can no longer be used. The value is automatically incremented when the token is used. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 redirectURI string RedirectURI is the redirection associated with the token. refreshToken string RefreshToken is the value by which this token can be renewed. Can be blank. scopes array (string) Scopes is an array of the requested scopes. userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token 2.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthaccesstokens DELETE : delete collection of OAuthAccessToken GET : list or watch objects of kind OAuthAccessToken POST : create an OAuthAccessToken /apis/oauth.openshift.io/v1/watch/oauthaccesstokens GET : watch individual changes to a list of OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthaccesstokens/{name} DELETE : delete an OAuthAccessToken GET : read the specified OAuthAccessToken PATCH : partially update the specified OAuthAccessToken PUT : replace the specified OAuthAccessToken /apis/oauth.openshift.io/v1/watch/oauthaccesstokens/{name} GET : watch changes to an object of kind OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/oauth.openshift.io/v1/oauthaccesstokens Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OAuthAccessToken Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAccessToken Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAccessToken Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . Table 2.8. Body parameters Parameter Type Description body OAuthAccessToken schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 201 - Created OAuthAccessToken schema 202 - Accepted OAuthAccessToken schema 401 - Unauthorized Empty 2.2.2. /apis/oauth.openshift.io/v1/watch/oauthaccesstokens Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/oauth.openshift.io/v1/oauthaccesstokens/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the OAuthAccessToken Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuthAccessToken Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 202 - Accepted OAuthAccessToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAccessToken Table 2.17. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAccessToken Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 201 - Created OAuthAccessToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAccessToken Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . Table 2.22. Body parameters Parameter Type Description body OAuthAccessToken schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 201 - Created OAuthAccessToken schema 401 - Unauthorized Empty 2.2.4. /apis/oauth.openshift.io/v1/watch/oauthaccesstokens/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the OAuthAccessToken Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/oauth_apis/oauthaccesstoken-oauth-openshift-io-v1 |
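As an illustration of the list endpoint documented above, the following commands retrieve OAuthAccessToken objects, first through the oc client and then directly against the REST path. This is a sketch only; it assumes a cluster-admin session, because regular users can normally read only their own tokens through the separate useroauthaccesstokens resource.

# List tokens with the client, then repeat the list call against the raw API path with a bearer token.
oc get oauthaccesstokens.oauth.openshift.io
TOKEN=$(oc whoami -t)
API_SERVER=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer ${TOKEN}" "${API_SERVER}/apis/oauth.openshift.io/v1/oauthaccesstokens?limit=10"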
Planning your deployment | Planning your deployment Red Hat OpenShift Data Foundation 4.15 Important considerations when deploying Red Hat OpenShift Data Foundation 4.15 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/4.17_release_notes/making-open-source-more-inclusive |
Part IV. Billing | Part IV. Billing | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/billing-1 |
Chapter 95. ConnectorPlugin schema reference | Chapter 95. ConnectorPlugin schema reference Used in: KafkaConnectStatus , KafkaMirrorMaker2Status Property Description type The type of the connector plugin. The available types are sink and source . string version The version of the connector plugin. string class The class of the connector plugin. string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-connectorplugin-reference |
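The plugin list is reported under the status of the KafkaConnect (or KafkaMirrorMaker2) resource once the operator has reconciled it. As a rough illustration, the following command prints the type, class, and version of each reported plugin; the resource name my-connect-cluster and the namespace kafka are placeholders, not values taken from this guide.

# Print one line per connector plugin reported in the KafkaConnect status.
oc get kafkaconnect my-connect-cluster -n kafka -o jsonpath='{range .status.connectorPlugins[*]}{.type}{"\t"}{.class}{"\t"}{.version}{"\n"}{end}'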
Chapter 103. Checking services using IdM Healthcheck | Chapter 103. Checking services using IdM Healthcheck You can monitor services used by the Identity Management (IdM) server using the Healthcheck tool. For details, see Healthcheck in IdM . Prerequisites The Healthcheck tool is only available on RHEL 8.1 and newer. 103.1. Services Healthcheck test The Healthcheck tool includes a test to check if any IdM service is not running. This test is important because services which are not running can cause failures in other tests. Therefore, check that all services are running first. You can then check all other test results. To see all services tests, run ipa-healthcheck with the --list-sources option: You can find all services tested with Healthcheck under the ipahealthcheck.meta.services source: certmonger dirsrv gssproxy httpd ipa_custodia ipa_dnskeysyncd ipa_otpd kadmin krb5kdc named pki_tomcatd sssd Note Run these tests on all IdM servers when trying to discover issues. 103.2. Screening services using Healthcheck Follow this procedure to run a standalone manual test of services running on the Identity Management (IdM) server using the Healthcheck tool. The Healthcheck tool includes many tests, whose results can be shortened with: Excluding all successful tests: --failures-only Including only services tests: --source=ipahealthcheck.meta.services Procedure To run Healthcheck with warnings, errors and critical issues regarding services, enter: A successful test displays empty brackets: If one of the services fails, the result can look similar to this example: Additional resources See man ipa-healthcheck . | [
"ipa-healthcheck --list-sources",
"ipa-healthcheck --source=ipahealthcheck.meta.services --failures-only",
"[ ]",
"{ \"source\": \"ipahealthcheck.meta.services\", \"check\": \"httpd\", \"result\": \"ERROR\", \"kw\": { \"status\": false, \"msg\": \"httpd: not running\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/checking-services-using-idm-healthcheck_configuring-and-managing-idm |
1.3. Automated Installation | 1.3. Automated Installation Anaconda installations can be automated through the use of a Kickstart file. Kickstart files can be used to configure any aspect of installation, allowing installation without user interaction, and can be used to easily automate installation of multiple instances of Red Hat Enterprise Linux. In most situations, you can simply follow the procedure outlined in Section 4.2, "Automatic Installation" to create and configure a Kickstart file, which can be used to perform an arbitrary number of non-interactive installations of Red Hat Enterprise Linux. Kickstart files can be automatically created based on choices made using the graphical interface, through the online Kickstart Generator tool, or written from scratch using any text editor. For more information, see Section 27.2.1, "Creating a Kickstart File" . Kickstart files can be easily maintained and updated using various utilities in Red Hat Enterprise Linux. For more information, see Section 27.2.2, "Maintaining the Kickstart File" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-automated-installation |
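As a rough illustration of what a Kickstart file contains, the sketch below writes a minimal file with placeholder values (the language, time zone, disk layout, and locked root account are all assumptions to adapt) and then checks its syntax with the ksvalidator utility from the pykickstart package.

# Write a minimal Kickstart file with placeholder values, then validate its syntax.
cat > /tmp/ks.cfg <<'EOF'
lang en_US.UTF-8
keyboard us
timezone America/New_York --utc
rootpw --lock
network --bootproto=dhcp --activate
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@core
%end
EOF
ksvalidator /tmp/ks.cfg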
Chapter 4. Resolved issues and known issues | Chapter 4. Resolved issues and known issues 4.1. Resolved issues See Resolved issues for JBoss EAP XP 5.0.0 to view the list of issues that have been resolved for this release. 4.2. Known issues See Known issues for JBoss EAP XP 5.0.0 to view the list of known issues for this release. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_eap_xp_5.0_release_notes/resolved_issues_and_known_issues |
B.111. yum-rhn-plugin and rhn-client-tools | B.111. yum-rhn-plugin and rhn-client-tools B.111.1. RHEA-2010:0949 - yum-rhn-plugin and rhn-client-tools enhancement update Updated yum-rhn-plugin and rhn-client-tools packages that add an enhancement are now available for Red Hat Enterprise Linux 6. Red Hat Network Client Tools provide programs and libraries that allow a system to receive software updates from Red Hat Network (RHN). yum-rhn-plugin allows yum to access a Red Hat Network server for software updates. Enhancement BZ# 649435 These packages have been updated to support the Red Hat Network Satellite Server Maintenance Window, allowing a user to download scheduled packages and errata before the start of the maintenance window. Users of rhn-client-tools and yum-rhn-plugin are advised to upgrade to these updated packages, which add this enhancement. Note that this feature is disabled by default. For information on how to enable it, refer to https://access.redhat.com/site/solutions/42227 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/yum-rhn-plugin |
Migrating from a standalone Manager to a self-hosted engine | Migrating from a standalone Manager to a self-hosted engine Red Hat Virtualization 4.3 How to migrate the Red Hat Virtualization Manager from a standalone server to a self-managed virtual machine Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes how to migrate your Red Hat Virtualization environment from a Manager installed on a physical server (or a virtual machine running in a separate environment) to a Manager installed on a virtual machine running on the same hosts it manages. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/index |
Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] | Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] Description CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes. For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123" The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero The producer of these objects can decide which approach is more suitable. They are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node. Type object Required storageClassName 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources capacity Quantity capacity is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. The semantic is currently (CSI spec 1.2) defined as: The available capacity, in bytes, of the storage that can be used to provision volumes. If not set, that information is currently unavailable. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds maximumVolumeSize Quantity maximumVolumeSize is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. This is defined since CSI spec 1.4.0 as the largest size that may be used in a CreateVolumeRequest.capacity_range.required_bytes field to create a volume with the same parameters as those in GetCapacityRequest. The corresponding value in the Kubernetes API is ResourceRequirements.Requests in a volume claim. metadata ObjectMeta Standard object's metadata. The name has no particular meaning. It must be a DNS subdomain (dots allowed, 253 characters). To ensure that there are no conflicts with other CSI drivers on the cluster, the recommendation is to use csisc-<uuid>, a generated name, or a reverse-domain name which ends with the unique CSI driver name. Objects are namespaced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata nodeTopology LabelSelector nodeTopology defines which nodes have access to the storage for which capacity was reported. 
If not set, the storage is not accessible from any node in the cluster. If empty, the storage is accessible from all nodes. This field is immutable. storageClassName string storageClassName represents the name of the StorageClass that the reported capacity applies to. It must meet the same requirements as the name of a StorageClass object (non-empty, DNS subdomain). If that object no longer exists, the CSIStorageCapacity object is obsolete and should be removed by its creator. This field is immutable. 4.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csistoragecapacities GET : list or watch objects of kind CSIStorageCapacity /apis/storage.k8s.io/v1/watch/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities DELETE : delete collection of CSIStorageCapacity GET : list or watch objects of kind CSIStorageCapacity POST : create a CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} DELETE : delete a CSIStorageCapacity GET : read the specified CSIStorageCapacity PATCH : partially update the specified CSIStorageCapacity PUT : replace the specified CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} GET : watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/storage.k8s.io/v1/csistoragecapacities HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.1. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty 4.2.2. /apis/storage.k8s.io/v1/watch/csistoragecapacities HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities HTTP method DELETE Description delete collection of CSIStorageCapacity Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.5. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIStorageCapacity Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 202 - Accepted CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.4. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity HTTP method DELETE Description delete a CSIStorageCapacity Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIStorageCapacity Table 4.13. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIStorageCapacity Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIStorageCapacity Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.6. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity HTTP method GET Description watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/csistoragecapacity-storage-k8s-io-v1 |
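CSIStorageCapacity objects are normally created and kept up to date by CSI drivers rather than by users, so the most common interaction with this API is read-only. The following sketch lists the objects across all namespaces and then prints a few of the fields described above; <driver_namespace> is a placeholder for whichever namespace the driver publishes its objects into.

# List CSIStorageCapacity objects cluster-wide, then show the storage class, capacity, and maximum volume size.
oc get csistoragecapacities -A
oc get csistoragecapacities -n <driver_namespace> -o custom-columns=NAME:.metadata.name,CLASS:.storageClassName,CAPACITY:.capacity,MAXSIZE:.maximumVolumeSize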
AMQ Clients Overview | AMQ Clients Overview Red Hat AMQ 2020.Q4 For Use with AMQ Clients 2.8 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_overview/index |
5.300. setroubleshoot | 5.300. setroubleshoot 5.300.1. RHBA-2012:1005 - setroubleshoot bug fix update Updated setroubleshoot packages that fix one bug are now available for Red Hat Enterprise Linux 6. The setroubleshoot packages provide tools to help diagnose SELinux problems. When AVC messages are returned, an alert can be generated that provides information about the problem and helps to track its resolution. Alerts can be customized to user preference. The same tools can used to process log files. Bug Fix BZ# 832186 Previously, SELinux Alert Browser did not display alerts even if selinux denials were presented. This was caused by sedispatch, which did not handle audit messages correctly, and users were not able to fix their SELinux issues according to the SELinux alerts. With this update, this bug has been fixed so that users are now able to fix their SELinux issues using setroubleshoot, as expected. All users of setroubleshoot are advised to upgrade to these updated packages, which fix this bug. 5.300.2. RHBA-2012:0781 - setroubleshoot bug fix update Updated setroubleshoot packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The setroubleshoot packages provide tools to help diagnose SELinux problems. When AVC messages are returned, an alert can be generated that provides information about the problem and helps to track its resolution. Alerts can be customized to user preference. The same tools can be used to process log files. Bug Fixes BZ# 757857 Previously, when the system produced a large amount of SELinux denials, the setroubleshootd daemon continued to process all of these denials and increased its memory consumption. Consequently, a memory leak could occur. This bug has been fixed and memory leaks no longer occur in the described scenario. BZ# 633213 When a duplicated command-line option was passed to the sealert utility, sealert terminated with an error message. With this update, the duplicate options are ignored and sealert works as expected, thus fixing this bug. BZ# 575686 Previously, some translations for setroubleshoot were incomplete or missing. This update provides complete translations for 22 languages. Users of setroubleshoot are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/setroubleshoot |
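For reference, the alert-processing tools mentioned in these advisories are driven by the sealert utility. The following is a brief sketch only; the audit log path is the usual default location and the alert ID is a placeholder, neither of which is stated in the advisories above.

# Summarize all current SELinux alerts from the setroubleshoot database
sealert -l "*"

# Analyze a saved audit log file instead of the live database
sealert -a /var/log/audit/audit.log

# Show the details of a single alert by its ID (placeholder value)
sealert -l 2b2b1b1a-0c0c-4d4d-8e8e-123456789abc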
Chapter 4. Red Hat OpenShift Cluster Manager | Chapter 4. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create new clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades 4.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager using your login credentials. 4.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the amount of load balancers and persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 4.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Networking Insights Advisor Machine pools Support Settings 4.3.1. Overview tab The Overview tab provides information about how the cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Domain prefix is the prefix that is used throughout the cluster. The default value is the cluster's name. Type shows the OpenShift version that the cluster is using. Control plane type is the architecture type of the cluster. The field only displays if the cluster uses a hosted control plane architecture. Region is the server region. Availability shows which type of availability zone that the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Delete Protection: <status> shows whether or not the cluster's delete protection is enabled. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Infrastructure AWS account displays the AWS account that is responsible for cluster creation and maintenance. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. Network field shows the address and prefixes for network connectivity. 
OIDC configuration field shows the Open ID Connect configuration for the cluster. Resource usage section of the tab displays the resources in use with a graph. Advisor recommendations section gives insight in relation to security, performance, availability, and stability. This section requires the use of remote health functionality. See Using Insights to identify issues with the cluster in the Additional resources section. 4.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user shows the "Cluster Editor" access. 4.3.3. Add-ons tab 4.3.4. Insights Advisor tab The Insights Advisor tab uses the Remote Health functionality of the OpenShift Container Platform to identify and mitigate risks to security, performance, availability, and stability. See Using Insights to identify issues with your cluster in the OpenShift Container Platform documentation. 4.3.5. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools if there is enough available quota, or edit an existing machine pool. Selecting the > Edit option opens the "Edit machine pool" dialog. In this dialog, you can change the node count per availability zone, edit node labels and taints, and view any associated AWS security groups. 4.3.6. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. Also from this tab, you can open a support case to request technical support for your cluster. 4.3.7. Settings tab The Settings tab provides a few options for the cluster owner: Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Update status shows the current version and if there are any updates available. 4.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/architecture/ocm-overview-ocp |
Chapter 1. Introduction to OpenStack networking | Chapter 1. Introduction to OpenStack networking The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat OpenStack Services on OpenShift (RHOSO). The RHOSO Networking service manages internal and external traffic to and from virtual machine instances and provides core services such as routing, segmentation, DHCP, and metadata. It provides the API for virtual networking capabilities and management of switches, routers, ports, and firewalls. 1.1. Managing your RHOSO networks With the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) you can effectively meet your site's networking goals. You can do the following tasks: Customize the networks used in your data plane. In RHOSO, the network configuration applied by default to the data plane nodes is the single NIC VLANs configuration. However, you can modify the network configuration that the OpenStack Operator applies for each data plane node set in your RHOSO environment. For more information, see Customizing data plane networks . Provide connectivity to VM instances within a project. Project networks primarily enable general (non-privileged) projects to manage networks without involving administrators. These networks are entirely virtual and require virtual routers to interact with other project networks and external networks such as the Internet. Project networks also usually provide DHCP and metadata services to VM (virtual machine) instances. RHOSO supports the following project network types: flat, VLAN, and GENEVE. For more information, see Managing project networks . Set ingress and egress limits for traffic on VM instances. You can offer varying service levels for instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic. You can apply QoS policies to individual ports. You can also apply QoS policies to a project network, where ports with no specific policy attached inherit the policy. For more information, see Configuring Quality of Service (QoS) policies . Control which projects can attach instances to a shared network. Using role-based access control (RBAC) policies in the RHOSO Networking service, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project. For more information, see Configuring RBAC policies . Secure your network at the port level. Security groups provide a container for virtual firewall rules that control ingress (inbound to instances) and egress (outbound from instances) network traffic at the port level. Security groups use a default deny policy and only contain rules that allow specific traffic. Each port can reference one or more security groups in an additive fashion. ML2/OVN uses the Open vSwitch firewall driver to translate security group rules to a configuration. By default, security groups are stateful. In ML2/OVN deployments, you can also create stateless security groups. A stateless security group can provide significant performance benefits. Unlike stateful security groups, stateless security groups do not automatically allow returning traffic, so you must create a complimentary security group rule to allow the return of related traffic. For more information, see Configuring shared security groups . 1.2. 
Networking service components The Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) includes the following components: API server The RHOSO networking API includes support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. RHOSO networking includes a growing list of plug-ins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers. Modular Layer 2 (ML2) plug-in and agents ML2 plugs and unplugs ports, creates networks or subnets, and provides IP addressing. Messaging queue Accepts and routes RPC requests between RHOSO services to complete API operations. 1.3. Modular Layer 2 (ML2) networking Modular Layer 2 (ML2) is the Red Hat OpenStack Services on OpenShift (RHOSO) networking core plug-in. The ML2 modular design enables the concurrent operation of mixed network technologies through mechanism drivers. Open Virtual Network (OVN) is the default mechanism driver used with ML2. The ML2 framework distinguishes between the two kinds of drivers that can be configured: Type drivers Define how an RHOSO network is technically realized. Each available network type is managed by an ML2 type driver, and they maintain any required type-specific network state. Validating the type-specific information for provider networks, type drivers are responsible for the allocation of a free segment in project networks. Examples of type drivers are GENEVE, VLAN, and flat networks. Mechanism drivers Define the mechanism to access an RHOSO network of a certain type. The mechanism driver takes the information established by the type driver and applies it to the networking mechanisms that have been enabled. RHOSO uses the OVN mechanism driver. Mechanism drivers can employ L2 agents, and by using RPC interact directly with external devices or controllers. You can use multiple mechanism and type drivers simultaneously to access different ports of the same virtual network. 1.4. ML2 network types You can operate multiple network segments at the same time. ML2 supports the use and interconnection of multiple network segments. You don't have to bind a port to a network segment because ML2 binds ports to segments with connectivity. Depending on the mechanism driver, ML2 supports the following network segment types: Flat VLAN GENEVE tunnels Flat All virtual machine (VM) instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs. VLAN With RHOSO networking users can create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other network infrastructure on the same Layer 2 VLAN. You can use VLANs to segment network traffic for computers running on the same switch. This means that you can logically divide your switch by configuring the ports to be members of different networks - they are basically mini-LANs that you can use to separate traffic for security reasons. For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18 to VLAN201. 
As a result, computers connected to VLAN200 are completely separate from those on VLAN201; they cannot communicate directly, and if they wanted to, the traffic must pass through a router as if they were two separate physical switches. Firewalls can also be useful for governing which VLANs can communicate with each other. GENEVE tunnels Generic Network Virtualization Encapsulation (GENEVE) recognizes and accommodates changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. GENEVE defines the content of the metadata flexibly that is added during encapsulation and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver. 1.5. Extension drivers for the RHOSO Networking service The Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is extensible. Extensions serve two purposes: they allow the introduction of new features in the API without requiring a version change and they allow the introduction of vendor specific niche functionality. Applications can programmatically list available extensions by performing a GET on the /extensions URI. Note that this is a versioned request; that is, an extension available in one API version might not be available in another. The ML2 plug-in also supports extension drivers that allows other pluggable drivers to extend the core resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include support for QoS, port security, and so on. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_networking_services/networking-overview_rhoso-cfgnet |
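As a rough, end-to-end illustration of the concepts in this chapter, the following sketch creates a project network and subnet, a stateless security group, and an egress bandwidth limit with the openstack client. All names, the CIDR, and the port ID are placeholders, and the --stateless option assumes the ML2/OVN mechanism driver described above; this is an illustrative outline, not a prescribed procedure.

# Project network; the ML2 type driver allocates a free segment (GENEVE by default with ML2/OVN)
openstack network create web-net

# Subnet so that instances attached to the network receive addresses via DHCP
openstack subnet create web-subnet --network web-net --subnet-range 192.0.2.0/24

# Stateless security group: returning traffic is not allowed automatically,
# so a complementary egress rule is created explicitly
openstack security group create web-sg --stateless
openstack security group rule create web-sg --ingress --protocol tcp --dst-port 80
openstack security group rule create web-sg --egress --protocol tcp

# QoS policy that rate-limits egress traffic, applied to an individual port
openstack network qos policy create bw-limit
openstack network qos rule create bw-limit --type bandwidth-limit --max-kbps 3000 --egress
openstack port set --qos-policy bw-limit <port-id>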
3.7. memory | 3.7. memory The memory subsystem generates automatic reports on memory resources used by the tasks in a cgroup, and sets limits on memory use of those tasks: Note By default, the memory subsystem uses 40 bytes of memory per physical page on x86_64 systems. These resources are consumed even if memory is not used in any hierarchy. If you do not plan to use the memory subsystem, you can disable it to reduce the resource consumption of the kernel. To permanently disable the memory subsystem, open the /boot/grub/grub.conf configuration file as root and append the following text to the line that starts with the kernel keyword: cgroup_disable=memory For more information on working with /boot/grub/grub.conf , see the Configuring the GRUB Boot Loader chapter in the Red Hat Enterprise Linux 6 Deployment Guide . To temporarily disable the memory subsystem for a single session, perform the following steps when starting the system: At the GRUB boot screen, press any key to enter the GRUB interactive menu. Select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press the a key to modify the kernel parameters. Type cgroup_disable=memory at the end of the line and press Enter to exit GRUB edit mode. With cgroup_disable=memory enabled, memory is not visible as an individually mountable subsystem and it is not automatically mounted when mounting all cgroups in a single hierarchy. Please note that memory is currently the only subsystem that can be effectively disabled with cgroup_disable to save resources. Using this option with other subsystems only disables their usage, but does not cut their resource consumption. However, other subsystems do not consume as much resources as the memory subsystem. The following tunable parameters are available for the memory subsystem: memory.stat reports a wide range of memory statistics, as described in the following table: Table 3.2. Values reported by memory.stat Statistic Description cache page cache, including tmpfs ( shmem ), in bytes rss anonymous and swap cache, not including tmpfs ( shmem ), in bytes mapped_file size of memory-mapped mapped files, including tmpfs ( shmem ), in bytes pgpgin number of pages paged into memory pgpgout number of pages paged out of memory swap swap usage, in bytes active_anon anonymous and swap cache on active least-recently-used (LRU) list, including tmpfs ( shmem ), in bytes inactive_anon anonymous and swap cache on inactive LRU list, including tmpfs ( shmem ), in bytes active_file file-backed memory on active LRU list, in bytes inactive_file file-backed memory on inactive LRU list, in bytes unevictable memory that cannot be reclaimed, in bytes hierarchical_memory_limit memory limit for the hierarchy that contains the memory cgroup, in bytes hierarchical_memsw_limit memory plus swap limit for the hierarchy that contains the memory cgroup, in bytes Additionally, each of these files other than hierarchical_memory_limit and hierarchical_memsw_limit has a counterpart prefixed total_ that reports not only on the cgroup, but on all its children as well. For example, swap reports the swap usage by a cgroup and total_swap reports the total swap usage by the cgroup and all its child groups. When you interpret the values reported by memory.stat , note how the various statistics inter-relate: active_anon + inactive_anon = anonymous memory + file cache for tmpfs + swap cache Therefore, active_anon + inactive_anon != rss , because rss does not include tmpfs . 
active_file + inactive_file = cache - size of tmpfs memory.usage_in_bytes reports the total current memory usage by processes in the cgroup (in bytes). memory.memsw.usage_in_bytes reports the sum of current memory usage plus swap space used by processes in the cgroup (in bytes). memory.max_usage_in_bytes reports the maximum memory used by processes in the cgroup (in bytes). memory.memsw.max_usage_in_bytes reports the maximum amount of memory and swap space used by processes in the cgroup (in bytes). memory.limit_in_bytes sets the maximum amount of user memory (including file cache). If no units are specified, the value is interpreted as bytes. However, it is possible to use suffixes to represent larger units - k or K for kilobytes, m or M for megabytes, and g or G for gigabytes. For example, to set the limit to 1 gigabyte, execute: ~]# echo 1G > /cgroup/memory/lab1/memory.limit_in_bytes You cannot use memory.limit_in_bytes to limit the root cgroup; you can only apply values to groups lower in the hierarchy. Write -1 to memory.limit_in_bytes to remove any existing limits. memory.memsw.limit_in_bytes sets the maximum amount for the sum of memory and swap usage. If no units are specified, the value is interpreted as bytes. However, it is possible to use suffixes to represent larger units - k or K for kilobytes, m or M for megabytes, and g or G for gigabytes. You cannot use memory.memsw.limit_in_bytes to limit the root cgroup; you can only apply values to groups lower in the hierarchy. Write -1 to memory.memsw.limit_in_bytes to remove any existing limits. Important It is important to set the memory.limit_in_bytes parameter before setting the memory.memsw.limit_in_bytes parameter: attempting to do so in the reverse order results in an error. This is because memory.memsw.limit_in_bytes becomes available only after all memory limitations (previously set in memory.limit_in_bytes ) are exhausted. Consider the following example: setting memory.limit_in_bytes = 2G and memory.memsw.limit_in_bytes = 4G for a certain cgroup will allow processes in that cgroup to allocate 2 GB of memory and, once exhausted, allocate another 2 GB of swap only. The memory.memsw.limit_in_bytes parameter represents the sum of memory and swap. Processes in a cgroup that does not have the memory.memsw.limit_in_bytes parameter set can potentially use up all the available swap (after exhausting the set memory limitation) and trigger an Out Of Memory situation caused by the lack of available swap. The order in which the memory.limit_in_bytes and memory.memsw.limit_in_bytes parameters are set in the /etc/cgconfig.conf file is important as well. The following is a correct example of such a configuration: memory.failcnt reports the number of times that the memory limit has reached the value set in memory.limit_in_bytes . memory.memsw.failcnt reports the number of times that the memory plus swap space limit has reached the value set in memory.memsw.limit_in_bytes . memory.soft_limit_in_bytes enables flexible sharing of memory. Under normal circumstances, control groups are allowed to use as much of the memory as needed, constrained only by their hard limits set with the memory.limit_in_bytes parameter. However, when the system detects memory contention or low memory, control groups are forced to restrict their consumption to their soft limits . 
To set the soft limit for example to 256 MB, execute: ~]# echo 256M > /cgroup/memory/lab1/memory.soft_limit_in_bytes This parameter accepts the same suffixes as memory.limit_in_bytes to represent units. To have any effect, the soft limit must be set below the hard limit. If lowering the memory usage to the soft limit does not solve the contention, cgroups are pushed back as much as possible to make sure that one control group does not starve the others of memory. Note that soft limits take effect over a long period of time, since they involve reclaiming memory for balancing between memory cgroups. memory.force_empty when set to 0 , empties memory of all pages used by tasks in the cgroup. This interface can only be used when the cgroup has no tasks. If memory cannot be freed, it is moved to a parent cgroup if possible. Use the memory.force_empty parameter before removing a cgroup to avoid moving out-of-use page caches to its parent cgroup. memory.swappiness sets the tendency of the kernel to swap out process memory used by tasks in this cgroup instead of reclaiming pages from the page cache. This is the same tendency, calculated the same way, as set in /proc/sys/vm/swappiness for the system as a whole. The default value is 60 . Values lower than 60 decrease the kernel's tendency to swap out process memory, values greater than 60 increase the kernel's tendency to swap out process memory, and values greater than 100 permit the kernel to swap out pages that are part of the address space of the processes in this cgroup. Note that a value of 0 does not prevent process memory being swapped out; swap out might still happen when there is a shortage of system memory because the global virtual memory management logic does not read the cgroup value. To lock pages completely, use mlock() instead of cgroups. You cannot change the swappiness of the following groups: the root cgroup, which uses the swappiness set in /proc/sys/vm/swappiness . a cgroup that has child groups below it. memory.move_charge_at_immigrate allows moving charges associated with a task along with task migration. Charging is a way of giving a penalty to cgroups which access shared pages too often. These penalties, also called charges , are by default not moved when a task migrates from one cgroup to another. The pages allocated from the original cgroup still remain charged to it; the charge is dropped when the page is freed or reclaimed. With memory.move_charge_at_immigrate enabled, the pages associated with a task are taken from the old cgroup and charged to the new cgroup. The following example shows how to enable memory.move_charge_at_immigrate : ~]# echo 1 > /cgroup/memory/lab1/memory.move_charge_at_immigrate Charges are moved only when the moved task is a leader of a thread group. If there is not enough memory for the task in the destination cgroup, an attempt to reclaim memory is performed. If the reclaim is not successful, the task migration is aborted. To disable memory.move_charge_at_immigrate , execute: ~]# echo 0 > /cgroup/memory/lab1/memory.move_charge_at_immigrate memory.use_hierarchy contains a flag ( 0 or 1 ) that specifies whether memory usage should be accounted for throughout a hierarchy of cgroups. If enabled ( 1 ), the memory subsystem reclaims memory from the children of and process that exceeds its memory limit. By default ( 0 ), the subsystem does not reclaim memory from a task's children. memory.oom_control contains a flag ( 0 or 1 ) that enables or disables the Out of Memory killer for a cgroup. 
If enabled ( 0 ), tasks that attempt to consume more memory than they are allowed are immediately killed by the OOM killer. The OOM killer is enabled by default in every cgroup using the memory subsystem; to disable it, write 1 to the memory.oom_control file: When the OOM killer is disabled, tasks that attempt to use more memory than they are allowed are paused until additional memory is freed. The memory.oom_control file also reports the OOM status of the current cgroup under the under_oom entry. If the cgroup is out of memory and tasks in it are paused, the under_oom entry reports the value 1 . The memory.oom_control file is capable of reporting an occurrence of an OOM situation using the notification API. For more information, refer to Section 2.13, "Using the Notification API" and Example 3.3, "OOM Control and Notifications" . 3.7.1. Example Usage Example 3.3. OOM Control and Notifications The following example demonstrates how the OOM killer takes action when a task in a cgroup attempts to use more memory than allowed, and how a notification handler can report OOM situations: Attach the memory subsystem to a hierarchy and create a cgroup: Set the amount of memory which tasks in the blue cgroup can use to 100 MB: Change into the blue directory and make sure the OOM killer is enabled: Move the current shell process into the tasks file of the blue cgroup so that all other processes started in this shell are automatically moved to the blue cgroup: Start a test program that attempts to allocate a large amount of memory exceeding the limit you set in step 2. As soon as the blue cgroup runs out of free memory, the OOM killer kills the test program and reports Killed to the standard output: The following is an example of such a test program [5] : #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #define KB (1024) #define MB (1024 * KB) #define GB (1024 * MB) int main(int argc, char *argv[]) { char *p; again: while ((p = (char *)malloc(GB))) memset(p, 0, GB); while ((p = (char *)malloc(MB))) memset(p, 0, MB); while ((p = (char *)malloc(KB))) memset(p, 0, KB); sleep(1); goto again; return 0; } Disable the OOM killer and rerun the test program. This time, the test program remains paused waiting for additional memory to be freed: While the test program is paused, note that the under_oom state of the cgroup has changed to indicate that the cgroup is out of available memory: Reenabling the OOM killer immediately kills the test program. To receive notifications about every OOM situation, create a program as specified in Section 2.13, "Using the Notification API" . 
For example [6] : #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <sys/eventfd.h> #include <errno.h> #include <string.h> #include <stdio.h> #include <stdlib.h> static inline void die(const char *msg) { fprintf(stderr, "error: %s: %s(%d)\n", msg, strerror(errno), errno); exit(EXIT_FAILURE); } static inline void usage(void) { fprintf(stderr, "usage: oom_eventfd_test <cgroup.event_control> <memory.oom_control>\n"); exit(EXIT_FAILURE); } #define BUFSIZE 256 int main(int argc, char *argv[]) { char buf[BUFSIZE]; int efd, cfd, ofd, rb, wb; uint64_t u; if (argc != 3) usage(); if ((efd = eventfd(0, 0)) == -1) die("eventfd"); if ((cfd = open(argv[1], O_WRONLY)) == -1) die("cgroup.event_control"); if ((ofd = open(argv[2], O_RDONLY)) == -1) die("memory.oom_control"); if ((wb = snprintf(buf, BUFSIZE, "%d %d", efd, ofd)) >= BUFSIZE) die("buffer too small"); if (write(cfd, buf, wb) == -1) die("write cgroup.event_control"); if (close(cfd) == -1) die("close cgroup.event_control"); for (;;) { if (read(efd, &u, sizeof(uint64_t)) != sizeof(uint64_t)) die("read eventfd"); printf("mem_cgroup oom event received\n"); } return 0; } The above program detects OOM situations in a cgroup specified as an argument on the command line and reports them using the mem_cgroup oom event received string to the standard output. Run the above notification handler program in a separate console, specifying the blue cgroup's control files as arguments: In a different console, run the mem_hog test program to create an OOM situation to see the oom_notification program report it in the standard output: [5] Source code provided by Red Hat Engineer Frantisek Hrbata. [6] Source code provided by Red Hat Engineer Frantisek Hrbata. | [
"memory { memory.limit_in_bytes = 1G; memory.memsw.limit_in_bytes = 1G; }",
"~]# echo 1 > /cgroup/memory/lab1/memory.oom_control",
"~]# mount -t cgroup -o memory memory /cgroup/memory ~]# mkdir /cgroup/memory/blue",
"~]# echo 104857600 > memory.limit_in_bytes",
"~]# cd /cgroup/memory/blue blue]# cat memory.oom_control oom_kill_disable 0 under_oom 0",
"blue]# echo USDUSD > tasks",
"blue]# ~/mem-hog Killed",
"#include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #define KB (1024) #define MB (1024 * KB) #define GB (1024 * MB) int main(int argc, char *argv[]) { char *p; again: while ((p = (char *)malloc(GB))) memset(p, 0, GB); while ((p = (char *)malloc(MB))) memset(p, 0, MB); while ((p = (char *)malloc(KB))) memset(p, 0, KB); sleep(1); goto again; return 0; }",
"blue]# echo 1 > memory.oom_control blue]# ~/mem-hog",
"~]# cat /cgroup/memory/blue/memory.oom_control oom_kill_disable 1 under_oom 1",
"#include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <sys/eventfd.h> #include <errno.h> #include <string.h> #include <stdio.h> #include <stdlib.h> static inline void die(const char *msg) { fprintf(stderr, \"error: %s: %s(%d)\\n\", msg, strerror(errno), errno); exit(EXIT_FAILURE); } static inline void usage(void) { fprintf(stderr, \"usage: oom_eventfd_test <cgroup.event_control> <memory.oom_control>\\n\"); exit(EXIT_FAILURE); } #define BUFSIZE 256 int main(int argc, char *argv[]) { char buf[BUFSIZE]; int efd, cfd, ofd, rb, wb; uint64_t u; if (argc != 3) usage(); if ((efd = eventfd(0, 0)) == -1) die(\"eventfd\"); if ((cfd = open(argv[1], O_WRONLY)) == -1) die(\"cgroup.event_control\"); if ((ofd = open(argv[2], O_RDONLY)) == -1) die(\"memory.oom_control\"); if ((wb = snprintf(buf, BUFSIZE, \"%d %d\", efd, ofd)) >= BUFSIZE) die(\"buffer too small\"); if (write(cfd, buf, wb) == -1) die(\"write cgroup.event_control\"); if (close(cfd) == -1) die(\"close cgroup.event_control\"); for (;;) { if (read(efd, &u, sizeof(uint64_t)) != sizeof(uint64_t)) die(\"read eventfd\"); printf(\"mem_cgroup oom event received\\n\"); } return 0; }",
"~]USD ./oom_notification /cgroup/memory/blue/cgroup.event_control /cgroup/memory/blue/memory.oom_control",
"blue]# ~/mem-hog"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-memory |
3.3. Kernel Address Space Randomization Red Hat Enterprise Linux 7.5 and later include the Kernel Address Space Randomization (KASLR) feature for KVM guest virtual machines. KASLR enables randomizing the physical and virtual address at which the kernel image is decompressed, and thus prevents guest security exploits based on the location of kernel objects. KASLR is activated by default, but can be deactivated on a specific guest by adding the nokaslr string to the guest's kernel command line. To edit the guest boot options, use the following command, where guestname is the name of your guest: Afterwards, modify the GRUB_CMDLINE_LINUX line, for example: Important Guest dump files created from guests that have KASLR activated are not readable by the crash utility. To fix this, add the <vmcoreinfo/> element to the <features> section of the XML configuration files of your guests. Note, however, that migrating guests with <vmcoreinfo/> fails if the destination host is using an OS that does not support <vmcoreinfo/> . These include Red Hat Enterprise Linux 7.4 and earlier, as well as Red Hat Enterprise Linux 6.9 and earlier. | [
"virt-edit -d guestname /etc/default/grub",
"GRUB_CMDLINE_LINUX=\"rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet nokaslr\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-guest_security-kaslr |
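The commands above only edit the guest's GRUB defaults; a hedged sketch of the remaining steps, regenerating the GRUB configuration and verifying the result, follows. The guest name rhel7-guest is a placeholder, and the grub2-mkconfig output path assumes a BIOS-booted RHEL 7 guest, which this section does not state.

# From the host, open the guest's GRUB defaults and append nokaslr to GRUB_CMDLINE_LINUX
virt-edit -d rhel7-guest /etc/default/grub

# Inside the guest, regenerate the GRUB configuration and reboot (assumed step)
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

# After the reboot, confirm that KASLR is deactivated
grep -o nokaslr /proc/cmdline

# To keep guest dump files readable by crash while KASLR stays enabled, add
# <vmcoreinfo/> under the <features> element of the guest definition
virsh edit rhel7-guest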
Chapter 1. Understanding API tiers | Chapter 1. Understanding API tiers Important This guidance does not cover layered OpenShift Container Platform offerings. API tiers for bare-metal configurations also apply to virtualized configurations except for any feature that directly interacts with hardware. Those features directly related to hardware have no application operating environment (AOE) compatibility level beyond that which is provided by the hardware vendor. For example, applications that rely on Graphics Processing Units (GPU) features are subject to the AOE compatibility provided by the GPU vendor driver. API tiers in a cloud environment for cloud specific integration points have no API or AOE compatibility level beyond that which is provided by the hosting cloud vendor. For example, APIs that exercise dynamic management of compute, ingress, or storage are dependent upon the underlying API capabilities exposed by the cloud platform. Where a cloud vendor modifies a prerequisite API, Red Hat will provide commercially reasonable efforts to maintain support for the API with the capability presently offered by the cloud infrastructure vendor. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with OpenShift Container Platform through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. 
If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the OpenShift Container Platform deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier apps.openshift.io/v1 Tier 1 authorization.openshift.io/v1 Tier 1, some tier 1 deprecated build.openshift.io/v1 Tier 1, some tier 1 deprecated config.openshift.io/v1 Tier 1 image.openshift.io/v1 Tier 1 network.openshift.io/v1 Tier 1 network.operator.openshift.io/v1 Tier 1 oauth.openshift.io/v1 Tier 1 imagecontentsourcepolicy.operator.openshift.io/v1alpha1 Tier 1 project.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 route.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) template.openshift.io/v1 Tier 1 console.openshift.io/v1 Tier 2 1.2.3. Support for Monitoring API groups API groups that end with the suffix monitoring.coreos.com have the following mapping: API version example API tier v1 Tier 1 v1alpha1 Tier 1 v1beta1 Tier 1 1.2.4. Support for Operator Lifecycle Manager API groups Operator Lifecycle Manager (OLM) provides APIs that include API groups with the suffix operators.coreos.com . These APIs have the following mapping: API version example API tier v2 Tier 1 v1 Tier 1 v1alpha1 Tier 1 1.3. API deprecation policy OpenShift Container Platform is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API OpenShift Container Platform is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by OpenShift Container Platform is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. 
While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/api_overview/understanding-api-support-tiers |
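Because the tier of an API is a function of the group and version it is served under, it can help to inspect both on a live cluster. The following is a small sketch and is not part of the policy text above; the apirequestcounts resource assumes OpenShift Container Platform 4.9 or later.

# List every API group/version the cluster serves (v1, *.openshift.io, *.k8s.io, and so on)
oc api-versions

# Show the group, version, and kind behind a particular set of resources
oc api-resources --api-group=route.openshift.io

# Review request traffic against API versions, including any flagged for removal in a future release
oc get apirequestcounts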
3.5. Example: List Host Collection This example uses a Red Hat Virtualization Host. Red Hat Virtualization Manager automatically registers any configured Red Hat Virtualization Host. This example retrieves a representation of the hosts collection and shows a Red Hat Virtualization Host named hypervisor registered with the virtualization environment. Example 3.5. List hosts collection Request: cURL command: Result: Note the id code of the hypervisor host. This code identifies the host in relation to other resources of your virtual environment. The host is a member of the Default cluster, and accessing the nics sub-collection shows this host has a connection to the ovirtmgmt network. | [
"GET /ovirt-engine/api/hosts HTTP/1.1 Accept: application/xml",
"curl -X GET -H \"Accept: application/xml\" -u [USER:PASS] --cacert [CERT] https:// [RHEVM Host] :443/ovirt-engine/api/hosts",
"HTTP/1.1 200 OK Accept: application/xml <hosts> <host id=\"0656f432-923a-11e0-ad20-5254004ac988\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988\"> <name>hypervisor</name> <actions> <link rel=\"install\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/install\"/> <link rel=\"activate\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/activate\"/> <link rel=\"fence\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/fence\"/> <link rel=\"deactivate\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/deactivate\"/> <link rel=\"approve\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/approve\"/> <link rel=\"iscsilogin\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/iscsilogin\"/> <link rel=\"iscsidiscover\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/iscsidiscover\"/> <link rel=\"commitnetconfig\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/ commitnetconfig\"/> </actions> <link rel=\"storage\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/storage\"/> <link rel=\"nics\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/nics\"/> <link rel=\"tags\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/tags\"/> <link rel=\"permissions\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/permissions\"/> <link rel=\"statistics\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/statistics\"/> <address>10.64.14.110</address> <status> <state>non_operational</state> </status> <cluster id=\"99408929-82cf-4dc7-a532-9d998063fa95\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95\"/> <port>54321</port> <storage_manager>true</storage_manager> <power_management> <enabled>false</enabled> <options/> </power_management> <ksm> <enabled>false</enabled> </ksm> <transparent_hugepages> <enabled>true</enabled> </transparent_hugepages> <iscsi> <initiator>iqn.1994-05.com.example:644949fe81ce</initiator> </iscsi> <cpu> <topology cores=\"2\"/> <name>Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz</name> <speed>2993</speed> </cpu> <summary> <active>0</active> <migrating>0</migrating> <total>0</total> </summary> </host> </hosts>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_list_host_collection |
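Following the link relations embedded in the representation above, a natural next request walks into the host's sub-collections. The sketch below reuses the id from the example response and the same placeholder credentials; it is an illustration rather than part of the original example.

# List the NICs of the host returned above, using its "nics" link
curl -X GET -H "Accept: application/xml" -u [USER:PASS] --cacert [CERT] https://[RHEVM Host]:443/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/nics

# Retrieve the same host's statistics sub-collection
curl -X GET -H "Accept: application/xml" -u [USER:PASS] --cacert [CERT] https://[RHEVM Host]:443/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988/statistics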
Chapter 6. DNS Operator in OpenShift Container Platform | Chapter 6. DNS Operator in OpenShift Container Platform The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods, enabling DNS-based Kubernetes Service discovery in OpenShift Container Platform. 6.1. DNS Operator The DNS Operator implements the dns API from the operator.openshift.io API group. The Operator deploys CoreDNS using a daemon set, creates a service for the daemon set, and configures the kubelet to instruct pods to use the CoreDNS service IP address for name resolution. Procedure The DNS Operator is deployed during installation with a Deployment object. Use the oc get command to view the deployment status: USD oc get -n openshift-dns-operator deployment/dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h Use the oc get command to view the state of the DNS Operator: USD oc get clusteroperator/dns Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m AVAILABLE , PROGRESSING and DEGRADED provide information about the status of the operator. AVAILABLE is True when at least 1 pod from the CoreDNS daemon set reports an Available status condition. 6.2. Changing the DNS Operator managementState DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. The managementState of the DNS Operator is set to Managed by default, which means that the DNS Operator is actively managing its resources. You can change it to Unmanaged , which means the DNS Operator is not managing its resources. The following are use cases for changing the DNS Operator managementState : You are a developer and want to test a configuration change to see if it fixes an issue in CoreDNS. You can stop the DNS Operator from overwriting the fix by setting the managementState to Unmanaged . You are a cluster administrator and have reported an issue with CoreDNS, but need to apply a workaround until the issue is fixed. You can set the managementState field of the DNS Operator to Unmanaged to apply the workaround. Procedure Change managementState DNS Operator: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' 6.3. Controlling DNS pod placement The DNS Operator has two daemon sets: one for CoreDNS and one for managing the /etc/hosts file. The daemon set for /etc/hosts must run on every node host to add an entry for the cluster image registry to support pulling images. Security policies can prohibit communication between pairs of nodes, which prevents the daemon set for CoreDNS from running on every node. As a cluster administrator, you can use a custom node selector to configure the daemon set for CoreDNS to run or not run on certain nodes. Prerequisites You installed the oc CLI. You are logged in to the cluster with a user with cluster-admin privileges. 
Procedure To prevent communication between certain nodes, configure the spec.nodePlacement.nodeSelector API field: Modify the DNS Operator object named default : USD oc edit dns.operator/default Specify a node selector that includes only control plane nodes in the spec.nodePlacement.nodeSelector API field: spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: "" To allow the daemon set for CoreDNS to run on nodes, configure a taint and toleration: Modify the DNS Operator object named default : USD oc edit dns.operator/default Specify a taint key and a toleration for the taint: spec: nodePlacement: tolerations: - effect: NoExecute key: "dns-only" operators: Equal value: abc tolerationSeconds: 3600 1 1 If the taint is dns-only , it can be tolerated indefinitely. You can omit tolerationSeconds . 6.4. View the default DNS Every new OpenShift Container Platform installation has a dns.operator named default . Procedure Use the oc describe command to view the default dns : USD oc describe dns.operator/default Example output Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS ... Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2 ... 1 The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. 2 The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. To find the service CIDR of your cluster, use the oc get command: USD oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}' Example output [172.30.0.0/16] 6.5. Using DNS forwarding You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways: Specify name servers for every zone. If the forwarded zone is the Ingress domain managed by OpenShift Container Platform, then the upstream name server must be authorized for the domain. Provide a list of upstream DNS servers. Change the default forwarding policy. Note A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers. Procedure Modify the DNS Operator object named default : USD oc edit dns.operator/default After you issue the command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on Server . If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers. Configuring DNS forwarding apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10 1 Must comply with the rfc6335 service name syntax. 2 Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field. 3 Defines the policy to select upstream resolvers. Default value is Random . You can also use the values RoundRobin , and Sequential . 4 A maximum of 15 upstreams is allowed per forwardPlugin . 5 Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. 
If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf . 6 Determines the order in which upstream servers are selected for querying. You can specify one of these values: Random , RoundRobin , or Sequential . The default value is Sequential . 7 Optional. You can use it to provide upstream resolvers. 8 You can specify two types of upstreams - SystemResolvConf and Network . SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Networkresolver . You can specify one or both. 9 If the specified type is Network , you must provide an IP address. The address field must be a valid IPv4 or IPv6 address. 10 If the specified type is Network , you can optionally provide a port. The port field must have a value between 1 and 65535 . If you do not specify a port for the upstream, by default port 853 is tried. Optional: When working in a highly regulated environment, you might need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that you can ensure additional DNS traffic and data privacy. Cluster administrators can configure transport layer security (TLS) for forwarded DNS queries. Configuring DNS forwarding with TLS apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10 1 Must comply with the rfc6335 service name syntax. 2 Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field. The cluster domain, cluster.local , is an invalid subdomain for zones . 3 When configuring TLS for forwarded DNS queries, set the transport field to have the value TLS . By default, CoreDNS caches forwarded connections for 10 seconds. CoreDNS will hold a TCP connection open for those 10 seconds if no request is issued. With large clusters, ensure that your DNS server is aware that it might get many new connections to hold open because you can initiate a connection per node. Set up your DNS hierarchy accordingly to avoid performance issues. 4 When configuring TLS for forwarded DNS queries, this is a mandatory server name used as part of the server name indication (SNI) to validate the upstream TLS server certificate. 5 Defines the policy to select upstream resolvers. Default value is Random . You can also use the values RoundRobin , and Sequential . 6 Required. You can use it to provide upstream resolvers. A maximum of 15 upstreams entries are allowed per forwardPlugin entry. 7 Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf . 8 Network type indicates that this upstream resolver should handle forwarded requests separately from the upstream resolvers listed in /etc/resolv.conf . Only the Network type is allowed when using TLS and you must provide an IP address. 9 The address field must be a valid IPv4 or IPv6 address. 10 You can optionally provide a port. 
The port must have a value between 1 and 65535 . If you do not specify a port for the upstream, by default port 853 is tried. Note If servers is undefined or invalid, the config map only contains the default server. Verification View the config map: USD oc get configmap/dns-default -n openshift-dns -o yaml Sample DNS ConfigMap based on the previous sample DNS apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns 1 Changes to the forwardPlugin trigger a rolling update of the CoreDNS daemon set. Additional resources For more information on DNS forwarding, see the CoreDNS forward documentation . 6.6. DNS Operator status You can inspect the status and view the details of the DNS Operator using the oc describe command. Procedure View the status of the DNS Operator: USD oc describe clusteroperators/dns 6.7. DNS Operator logs You can view DNS Operator logs by using the oc logs command. Procedure View the logs of the DNS Operator: USD oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator 6.8. Setting the CoreDNS log level You can configure the CoreDNS log level to determine the amount of detail in logged error messages. The valid values for CoreDNS log level are Normal , Debug , and Trace . The default logLevel is Normal . Note The errors plugin is always enabled. The following logLevel settings report different error responses: logLevel : Normal enables the "errors" class: log . { class error } . logLevel : Debug enables the "denial" class: log . { class denial error } . logLevel : Trace enables the "all" class: log . { class all } . Procedure To set logLevel to Debug , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Debug"}}' --type=merge To set logLevel to Trace , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Trace"}}' --type=merge Verification To ensure the desired log level was set, check the config map: USD oc get configmap/dns-default -n openshift-dns -o yaml 6.9. Setting the CoreDNS Operator log level Cluster administrators can configure the Operator log level to more quickly track down OpenShift DNS issues. The valid values for operatorLogLevel are Normal , Debug , and Trace . Trace has the most detailed information. The default operatorLogLevel is Normal . There are seven logging levels for issues: Trace, Debug, Info, Warning, Error, Fatal and Panic. After the logging level is set, log entries with that severity or anything above it will be logged. operatorLogLevel: "Normal" sets logrus.SetLogLevel("Info") . operatorLogLevel: "Debug" sets logrus.SetLogLevel("Debug") . operatorLogLevel: "Trace" sets logrus.SetLogLevel("Trace") . Procedure To set operatorLogLevel to Debug , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Debug"}}' --type=merge To set operatorLogLevel to Trace , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Trace"}}' --type=merge 6.10.
Tuning the CoreDNS cache You can configure the maximum duration of both successful and unsuccessful caching, also known as positive and negative caching respectively, done by CoreDNS. Tuning the duration of caching of DNS query responses can reduce the load on any upstream DNS resolvers. Procedure Edit the DNS Operator object named default by running the following command: USD oc edit dns.operator.openshift.io/default Modify the time-to-live (TTL) caching values: Configuring DNS caching apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2 1 The string value 1h is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be 0s and the cluster uses the internal default value of 900s as a fallback. 2 The string value can be a combination of units such as 0.5h10m and is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be 0s and the cluster uses the internal default value of 30s as a fallback. Warning Setting TTL fields to low values could lead to an increased load on the cluster, any upstream resolvers, or both. A short verification sketch for the settings in this chapter follows the command listing below. | [
"oc get -n openshift-dns-operator deployment/dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h",
"oc get clusteroperator/dns",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m",
"patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'",
"oc edit dns.operator/default",
"spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc edit dns.operator/default",
"spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" operators: Equal value: abc tolerationSeconds: 3600 1",
"oc describe dns.operator/default",
"Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2",
"oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'",
"[172.30.0.0/16]",
"oc edit dns.operator/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns",
"oc describe clusteroperators/dns",
"oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Debug\"}}' --type=merge",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Trace\"}}' --type=merge",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Debug\"}}' --type=merge",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Trace\"}}' --type=merge",
"oc edit dns.operator.openshift.io/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/dns-operator |
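The following is a minimal verification sketch for the node placement, forwarding, log level, and cache changes described in this chapter. It assumes the default DNS Operator object and the standard openshift-dns namespace; the grep patterns are illustrative only.

# Confirm where the CoreDNS pods were scheduled after setting spec.nodePlacement
oc get pods -n openshift-dns -o wide

# Check the node selector and tolerations on the daemon set that the Operator manages
oc get daemonset/dns-default -n openshift-dns -o yaml | grep -E -A4 'nodeSelector|tolerations'

# Confirm the rendered Corefile picked up the forward, cache, and log settings
oc get configmap/dns-default -n openshift-dns -o yaml | grep -E 'forward|cache|class'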
Chapter 42. Hardware Enablement | Chapter 42. Hardware Enablement LSI Syncro CS HA-DAS adapters Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 and later are encouraged to provide feedback to Red Hat and LSI. (BZ#1062759) tss2 enables TPM 2.0 for IBM Power LE The tss2 package adds the IBM implementation of a Trusted Computing Group Software Stack (TSS) 2.0 as a Technology Preview for the IBM Power LE architecture. This package enables users to interact with TPM 2.0 devices. (BZ#1384452) The ibmvnic device driver available as a Technology Preview Since Red Hat Enterprise Linux 7.3, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic , has been available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that when combined with SR-IOV NIC provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. With Red Hat Enterprise Linux 7.6, the ibmvnic driver has been upgraded to version 1.0, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: The code that previously requested error information has been removed because no error ID is provided by the Virtual Input-Output (VIOS) Server. Error reporting has been updated with the cause string. As a result, during a recovery, the driver classifies the string as a warning rather than an error. Error recovery on a login failure has been fixed. The failed state that occurred after a failover while migrating Logical Partitioning (LPAR) has been fixed. The driver can now handle all possible login response return values. A driver crash that happened during a failover or Link Power Management (LPM) operation when the Transmit and Receive (Tx/Rx) queues had changed has been fixed. (BZ#1519746) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/technology_previews_hardware_enablement
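A small, hedged sketch for checking whether the Technology Preview drivers and packages named above are present on a given system. It assumes an IBM POWER host for the ibmvnic check, and the output will vary by kernel.

# Is the ibmvnic module loaded, and which version ships with the installed kernel?
lsmod | grep ibmvnic
modinfo ibmvnic

# Is the tss2 package (TPM 2.0 support for IBM Power LE) installed?
rpm -q tss2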
Chapter 22. Using Ansible playbooks to manage self-service rules in IdM | Chapter 22. Using Ansible playbooks to manage self-service rules in IdM This section introduces self-service rules in Identity Management (IdM) and describes how to create and edit self-service access rules using Ansible playbooks. Self-service access control rules allow an IdM entity to perform specified operations on its IdM Directory Server entry. Self-service access control in IdM Using Ansible to ensure that a self-service rule is present Using Ansible to ensure that a self-service rule is absent Using Ansible to ensure that a self-service rule has specific attributes Using Ansible to ensure that a self-service rule does not have specific attributes 22.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 22.2. Using Ansible to ensure that a self-service rule is present The following procedure describes how to use an Ansible playbook to define self-service rules and ensure their presence on an Identity Management (IdM) server. In this example, the new Users can manage their own name details rule grants users the ability to change their own givenname , displayname , title and initials attributes. This allows them to, for example, change their display name or initials if they want to. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the new self-service rule. Set the permission variable to a comma-separated list of permissions to grant: read and write . Set the attribute variable to a list of attributes that users can manage themselves: givenname , displayname , title , and initials . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 22.3. 
Using Ansible to ensure that a self-service rule is absent The following procedure describes how to use an Ansible playbook to ensure a specified self-service rule is absent from your IdM configuration. The example below describes how to make sure the Users can manage their own name details self-service rule does not exist in IdM. This will ensure that users cannot, for example, change their own display name or initials. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule. Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 22.4. Using Ansible to ensure that a self-service rule has specific attributes The following procedure describes how to use an Ansible playbook to ensure that an already existing self-service rule has specific settings. In the example, you ensure the Users can manage their own name details self-service rule also has the surname member attribute. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule to modify. Set the attribute variable to surname . Set the action variable to member . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file available in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 22.5. Using Ansible to ensure that a self-service rule does not have specific attributes The following procedure describes how to use an Ansible playbook to ensure that a self-service rule does not have specific settings. You can use this playbook to make sure a self-service rule does not grant undesired access. In the example, you ensure the Users can manage their own name details self-service rule does not have the givenname and surname member attributes. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule you want to modify. Set the attribute variable to givenname and surname . Set the action variable to member . Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory | [
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-present.yml selfservice-present-copy.yml",
"--- - name: Self-service present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" permission: read, write attribute: - givenname - displayname - title - initials",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-absent.yml selfservice-absent-copy.yml",
"--- - name: Self-service absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-present.yml selfservice-member-present-copy.yml",
"--- - name: Self-service member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attribute surname is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - surname action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-absent.yml selfservice-member-absent-copy.yml",
"--- - name: Self-service member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attributes givenname and surname are absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - givenname - surname action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-absent-copy.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ansible-playbooks-to-manage-self-service-rules-in-idm_managing-users-groups-hosts |
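A minimal sketch of the inventory layout that the prerequisites in this chapter assume, together with a server-side check of the resulting rule. The server FQDN and vault password file are placeholders, and the ipa selfservice-show command is run on an IdM host after authenticating as admin.

# ~/MyPlaybooks/inventory -- hypothetical IdM server FQDN
cat > ~/MyPlaybooks/inventory <<'EOF'
[ipaserver]
server.idm.example.com
EOF

# Run one of the copied playbooks against that inventory
cd ~/MyPlaybooks
ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml

# On an IdM host, confirm the rule and its attribute list
kinit admin
ipa selfservice-show "Users can manage their own name details" --all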
Chapter 9. Building container images | Chapter 9. Building container images Building container images involves creating a blueprint for a containerized application. Blueprints rely on base images from other public repositories that define how the application should be installed and configured. Note Because blueprints rely on images from other public repositories, they might be subject to rate limiting. Consequently, your build could fail. Quay.io supports the ability to build Docker and Podman container images. This functionality is valuable for developers and organizations who rely on containers and container orchestration. On Quay.io, this feature works the same across both free and paid tier plans. Note Quay.io limits the number of simultaneous builds that a single user can submit at one time. 9.1. Build contexts When building an image with Docker or Podman, a directory is specified to become the build context . This is true for both manual Builds and Build triggers, because the Build that is created by Quay.io is no different from running docker build or podman build on your local machine. Quay.io Build contexts are always specified in the subdirectory from the Build setup, and fall back to the root of the Build source if a directory is not specified. When a build is triggered, Quay.io Build workers clone the Git repository to the worker machine, and then enter the Build context before conducting a Build. For Builds based on .tar archives, Build workers extract the archive and enter the Build context. For example: Extracted Build archive example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile Imagine that the Extracted Build archive is the directory structure of a Github repository called example. If no subdirectory is specified in the Build trigger setup, or when manually starting the Build, the Build operates in the example directory. If a subdirectory is specified in the Build trigger setup, for example, subdir , only the Dockerfile within it is visible to the Build. This means that you cannot use the ADD command in the Dockerfile to add file , because it is outside of the Build context. Unlike Docker Hub, the Dockerfile is part of the Build context on Quay.io. As a result, it must not appear in the .dockerignore file. 9.2. Tag naming for Build triggers Custom tags are available for use in Quay.io. One option is to include any string of characters assigned as a tag for each built image. Alternatively, you can use the following tag templates on the Configure Tagging section of the build trigger to tag images with information from each commit: USD{commit} : Full SHA of the issued commit USD{parsed_ref.branch} : Branch information (if available) USD{parsed_ref.tag} : Tag information (if available) USD{parsed_ref.remote} : The remote name USD{commit_info.date} : Date when the commit was issued USD{commit_info.author.username} : Username of the author of the commit USD{commit_info.short_sha} : First 7 characters of the commit SHA USD{committer.properties.username} : Username of the committer This list is not complete, but does contain the most useful options for tagging purposes. You can find the complete tag template schema on this page . For more information, see Set up custom tag templates in build triggers for Red Hat Quay and Quay.io 9.3. Skipping a source control-triggered build To specify that a commit should be ignored by the Quay.io build system, add the text [skip build] or [build skip] anywhere in your commit message. 9.4.
Viewing and managing builds Repository Builds can be viewed and managed on the Quay.io UI. Procedure Navigate to Quay.io and select a repository. In the navigation pane, select Builds . 9.5. Creating a new Build By default, Quay.io users can create new Builds out-of-the-box. Prerequisites You have navigated to the Builds page of your repository. Procedure On the Builds page, click Start New Build . When prompted, click Upload Dockerfile to upload a Dockerfile or an archive that contains a Dockerfile at the root directory. Click Start Build . Note Currently, users cannot specify the Docker build context when manually starting a build. Currently, BitBucket is unsupported on the Red Hat Quay v2 UI. You are redirected to the Build, which can be viewed in real-time. Wait for the Dockerfile Build to be completed and pushed. Optional. You can click Download Logs to download the logs, or Copy Logs to copy the logs. Click the back button to return to the Repository Builds page, where you can view the Build History. 9.6. Build triggers Build triggers invoke builds whenever the triggered condition is met, for example, a source control push, creating a webhook call , and so on. 9.6.1. Creating a Build trigger Use the following procedure to create a Build trigger using a custom Git repository. Note The following procedure assumes that you have not included Github credentials in your config.yaml file. Prerequisites You have navigated to the Builds page of your repository. Procedure On the Builds page, click Create Build Trigger . Select the desired platform, for example, Github, BitBucket, Gitlab, or use a custom Git repository. For this example, we are using a custom Git repository from Github. Enter a custom Git repository name, for example, git@github.com:<username>/<repo>.git . Then, click . When prompted, configure the tagging options by selecting one of, or both of, the following options: Tag manifest with the branch or tag name . When selecting this option, the built manifest is tagged with the name of the branch or tag for the git commit. Add latest tag if on default branch . When selecting this option, the built manifest is tagged with latest if the build occurred on the default branch for the repository. Optionally, you can add a custom tagging template. There are multiple tag templates that you can enter here, including using short SHA IDs, timestamps, author names, committer, and branch names from the commit as tags. For more information, see "Tag naming for Build triggers". After you have configured tagging, click . When prompted, select the location of the Dockerfile to be built when the trigger is invoked. If the Dockerfile is located at the root of the git repository and named Dockerfile, enter /Dockerfile as the Dockerfile path. Then, click . When prompted, select the context for the Docker build. If the Dockerfile is located at the root of the Git repository, enter / as the build context directory. Then, click . Optional. Choose an optional robot account. This allows you to pull a private base image during the build process. If you know that a private base image is not used, you can skip this step. Click . Check for any verification warnings. If necessary, fix the issues before clicking Finish . You are alerted that the trigger has been successfully activated. Note that using this trigger requires the following actions: You must give the following public key read access to the git repository. You must set your repository to POST to the following URL to trigger a build.
Save the SSH Public Key, then click Return to <organization_name>/<repository_name> . You are redirected to the Builds page of your repository. On the Builds page, you now have a Build trigger. For example: 9.6.2. Manually triggering a Build Builds can be triggered manually by using the following procedure. Procedure On the Builds page, click Start new build . When prompted, select Invoke Build Trigger . Click Run Trigger Now to manually start the process. After the build starts, you can see the Build ID on the Repository Builds page. 9.7. Setting up a custom Git trigger A custom Git trigger is a generic way for any Git server to act as a Build trigger. It relies solely on SSH keys and webhook endpoints. Everything else is left for the user to implement. 9.7.1. Creating a trigger Creating a custom Git trigger is similar to the creation of any other trigger, with the exception of the following: Quay.io cannot automatically detect the proper Robot Account to use with the trigger. This must be done manually during the creation process. There are extra steps after the creation of the trigger that must be done. These steps are detailed in the following sections. 9.7.2. Custom trigger creation setup When creating a custom Git trigger, two additional steps are required: You must provide read access to the SSH public key that is generated when creating the trigger. You must set up a webhook that POSTs to the Quay.io endpoint to trigger the build. The key and the URL are available by selecting View Credentials from the Settings , or gear icon. 9.7.2.1. SSH public key access Depending on the Git server configuration, there are multiple ways to install the SSH public key that Quay.io generates for a custom Git trigger. For example, Git documentation describes a small server setup in which adding the key to USDHOME/.ssh/authorized_keys would provide access for Builders to clone the repository. For any git repository management software that is not officially supported, there is usually a location to input the key, often labeled as Deploy Keys . 9.7.2.2. Webhook To automatically trigger a build, one must POST a .json payload to the webhook URL using the following format. This can be accomplished in various ways depending on the server setup, but for most cases can be done with a post-receive Git Hook . Note This request requires a Content-Type header containing application/json in order to be valid. A hedged curl sketch of such a request follows the command listing below. Example webhook { "commit": "1c002dd", // required "ref": "refs/heads/master", // required "default_branch": "master", // required "commit_info": { // optional "url": "gitsoftware.com/repository/commits/1234567", // required "message": "initial commit", // required "date": "timestamp", // required "author": { // optional "username": "user", // required "avatar_url": "gravatar.com/user.png", // required "url": "gitsoftware.com/users/user" // required }, "committer": { // optional "username": "user", // required "avatar_url": "gravatar.com/user.png", // required "url": "gitsoftware.com/users/user" // required } } } | [
"example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile",
"{ \"commit\": \"1c002dd\", // required \"ref\": \"refs/heads/master\", // required \"default_branch\": \"master\", // required \"commit_info\": { // optional \"url\": \"gitsoftware.com/repository/commits/1234567\", // required \"message\": \"initial commit\", // required \"date\": \"timestamp\", // required \"author\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required }, \"committer\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required } } }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/building-dockerfiles |
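A hedged sketch of the webhook call described in section 9.7.2.2, suitable for a post-receive hook or a manual test. The webhook URL is a placeholder taken from View Credentials, and only the required payload fields are shown.

# POST the minimal required payload with the mandatory Content-Type header
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"commit": "1c002dd", "ref": "refs/heads/master", "default_branch": "master"}' \
  "<webhook URL from View Credentials>"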
11.2. Configuring SNMP with the Red Hat High Availability Add-On | 11.2. Configuring SNMP with the Red Hat High Availability Add-On To configure SNMP with the Red Hat High Availability Add-On, perform the following steps on each node in the cluster to ensure that the necessary services are enabled and running. To use SNMP traps with the Red Hat High Availability Add-On, the snmpd service is required and acts as the master agent. Since the foghorn service is the subagent and uses the AgentX protocol, you must add the following line to the /etc/snmp/snmpd.conf file to enable AgentX support: To specify the host where the SNMP trap notifications should be sent, add the following line to the /etc/snmp/snmpd.conf file: For more information on notification handling, see the snmpd.conf man page. Make sure that the snmpd daemon is enabled and running by executing the following commands: If the messagebus daemon is not already enabled and running, execute the following commands: Make sure that the foghorn daemon is enabled and running by executing the following commands: Execute the following command to configure your system so that the COROSYNC-MIB generates SNMP traps and to ensure that the corosync-notifyd daemon is enabled and running: After you have configured each node in the cluster for SNMP and ensured that the necessary services are running, D-bus signals will be received by the foghorn service and translated into SNMPv2 traps. These traps are then passed to the host that you defined with the trapsink entry to receive SNMPv2 traps. A consolidated configuration and service-check sketch follows the command listing below. | [
"master agentx",
"trap2sink host",
"chkconfig snmpd on service snmpd start",
"chkconfig messagebus on service messagebus start",
"chkconfig foghorn on service foghorn start",
"echo \"OPTIONS=\\\"-d\\\" \" > /etc/sysconfig/corosync-notifyd chkconfig corosync-notifyd on service corosync-notifyd start"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-master-agent-config-CA |
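A consolidated sketch of the snmpd.conf additions and a service check for the trap path described above. The trap receiver address 192.0.2.10 is a documentation-range placeholder; the service syntax matches Red Hat Enterprise Linux 6.

# Append the AgentX master and trap sink lines described above
cat >> /etc/snmp/snmpd.conf <<'EOF'
master agentx
trap2sink 192.0.2.10
EOF

# Confirm every daemon in the trap path is running
service snmpd status
service messagebus status
service foghorn status
service corosync-notifyd status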
Chapter 1. About the Migration Toolkit for Containers | Chapter 1. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads between OpenShift Container Platform 4 clusters at the granularity of a namespace. Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . You can migrate applications within the same cluster or between clusters by using state migration. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on a remote cluster . See Advanced migration options for information about the following topics: Automating your migration with migration hooks and the MTC API. Configuring your migration plan to exclude resources, support large-scale migrations, and enable automatic PV resizing for direct volume migration. 1.1. MTC 1.8 support MTC 1.8.3 and earlier using OADP 1.3.z is supported on all OpenShift Container Platform versions 4.15 and earlier. MTC 1.8.4 and later using OADP 1.3.z is currently supported on all OpenShift Container Platform versions 4.15 and earlier. MTC 1.8.4 and later using OADP 1.4.z is currently supported on all supported OpenShift Container Platform versions 4.13 and later. 1.1.1. Support for Migration Toolkit for Containers (MTC) Table 1.1. Supported versions of MTC MTC version OpenShift Container Platform version General availability Full support ends Maintenance ends Extended Update Support (EUS) 1.8 4.14 4.15 4.16 4.17 02 Oct 2023 Release of 1.9 Release of 1.10 N/A 1.7 3 4.14 4.15 4.16 01 Mar 2022 01 Jul 2024 01 Jul 2025 N/A For more details about EUS, see Extended Update Support . 1.2. Terminology Table 1.2. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application.
Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 1.3. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.14 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. 
Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 1.4. About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 1.4.1. File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 1.3. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 1.4.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 1.4. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 1.5. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/about-mtc |
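Because the workflow above is driven by Kubernetes custom resources, a quick way to see what MTC created is to query them directly. This is a hedged sketch: the API group migration.openshift.io and the openshift-migration namespace are the usual defaults for the Migration Toolkit for Containers Operator and may differ in your installation.

# List the migration-related CRDs registered by the Operator
oc get crd | grep migration.openshift.io

# Inspect clusters, plans, and migrations in the Operator's default namespace
oc get migcluster,migplan,migmigration -n openshift-migration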
Chapter 2. Process Description | Chapter 2. Process Description RHEL OpenStack Platform includes all the drivers required for all Dell devices supported by the Block Storage service. In addition, the Director also has the puppet manifests, environment files, and Orchestration templates necessary for integrating the device as a back end to the Overcloud. Configuring a single Dell device as a back end involves editing the default environment file and including it in the Overcloud deployment. This file is available locally on the Undercloud, and can be edited to suit your environment. After editing this file, invoke it through the Director. Doing so ensures that it will persist through future Overcloud updates. The following sections describe this process in greater detail. In addition, the default environment file already contains enough information to call the necessary puppet manifests and Orchestration (Heat) templates that will configure the rest of the required Block Storage settings. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/dell_storage_center_back_end_guide/process_description |
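A hedged sketch of the deployment step this chapter describes. The environment file path is an assumption based on the files shipped with the Director for Dell Storage Center; verify the name under /usr/share/openstack-tripleo-heat-templates/environments/ on your undercloud before using it.

# Copy the shipped environment file so that local edits persist across package updates
cp /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml \
   /home/stack/templates/cinder-dellsc-config.yaml

# Include the edited file in the Overcloud deployment so the setting survives future updates
openstack overcloud deploy --templates \
  -e /home/stack/templates/cinder-dellsc-config.yaml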
function::sock_fam_num2str | function::sock_fam_num2str Name function::sock_fam_num2str - Given a protocol family number, return a string representation Synopsis Arguments family The family number | [
"sock_fam_num2str:string(family:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sock-fam-num2str |
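A small usage sketch for the function documented above. It assumes that protocol family number 2 corresponds to PF_INET on the system, which is the usual value on Linux.

# Print the string name for protocol family 2 and exit
stap -e 'probe begin { println(sock_fam_num2str(2)); exit() }'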
4.15. Appendix - Building Red Hat Gluster Storage Compute Engine Image from Scratch | 4.15. Appendix - Building Red Hat Gluster Storage Compute Engine Image from Scratch It is possible to deploy an existing Red Hat Enterprise Linux public image and perform a layered install of Red Hat Gluster Storage. This creates an effective "double charge" for each Red Hat Enterprise Linux instance. Note Google Compute Engine charges a premium fee for using a public Red Hat Enterprise Linux image for instances in order to cover the expense of the Red Hat subscription. When deploying a layered install, you must re-register the instances with Red Hat Subscription Manager, thus consuming a Red Hat Enterprise Linux entitlement that you have paid for separately. After registering with Subscription Manager, however, Google Compute Engine will continue to charge the premium fee for the instances. To avoid this, we will build a custom image, which will not be subject to the Google Compute Engine premium fee. For information on building a custom image from scratch, see https://cloud.google.com/compute/docs/tutorials/building-images . 4.15.1. Installing Red Hat Gluster Storage from the ISO to a RAW Disk Image File Using your local virtualization manager, create a virtual machine with a RAW format sparse flat-file backing the system disk. The suggested minimum disk size for the Red Hat Gluster Storage 3.4 (and later) system disk is 20 GB and the maximum disk size for import into Google Compute Engine is 100 GB. Google Compute Engine additionally requires the disk size be in whole GB increments, that is, 20 GB or 21 GB, but not 20.5 GB. The RAW disk file should have the disk.raw file name. The disk.raw file must include an MS-DOS (MBR) partition table. For example, run the following dd command to create a 20 GB sparse file to serve as the RAW disk image: Refer to the Google Compute Engine Hardware Manifest guide at https://cloud.google.com/compute/docs/tutorials/building-images#hardwaremanifest to ensure your virtual machine image is compatible with the Google Compute Engine platform. Note The steps below assume KVM/QEMU as your local virtualization platform. Attach the Red Hat Gluster Storage ISO, available from the Red Hat Customer Portal, as a bootable CD-ROM device to the image. Boot the VM to the ISO, and perform the installation of Red Hat Gluster Storage according to the instructions available in the Red Hat Gluster Storage 3.5 Installation Guide . 4.15.2. Enabling and Starting the Network Interface To enable and start the network interface: Enable the default eth0 network interface at boot time: Start the eth0 network interface: 4.15.3. Subscribing to the Red Hat Gluster Storage Server Channels You must register the system and enable the required channels for Red Hat Gluster Storage. For information on subscribing and connecting to the appropriate pool and repositories, refer to the Installing Red Hat Gluster Storage chapter in the Red Hat Gluster Storage 3.5 Installation Guide . 4.15.4. Updating your System Update your systems using the following command: 4.15.5. Tuning and Miscellaneous Configuration Set the tuned profile to rhgs-sequential-io using the following command: Note The rhgs-sequential-io profile is appropriate for this environment, but the rhgs-random-io profile may be more appropriate for different workloads. Disable SELinux: If SELinux support is required, refer to the Enabling SELinux chapter in the Red Hat Gluster Storage 3.5 Installation Guide . 4.15.6.
Customizing the Virtual Machine for Google Compute Engine Google Compute Engine's "Build a Compute Engine Image from Scratch" documentation includes specific instructions for configuring the kernel, network, packages, SSH, and security of the virtual machine. It is recommended that you reference this documentation directly for updated information to ensure compatibility of your image with Google Compute Engine. A hedged sketch of the packaging and import steps that follow the power-off appears after the command listing below. Power off the instance to apply all changes and prepare the image import: | [
"dd if=/dev/zero of=disk.raw bs=1 count=0 seek=20G",
"sed -i s/ONBOOT=no/ONBOOT=yes/ /etc/sysconfig/network-scripts/ifcfg-eth0",
"ifup eth0",
"yum -y update",
"tuned-adm profile rhgs-sequential-io",
"setenforce 0",
"init 0"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-documentation-deployment_guide_for_public_cloud-google_cloud_platform-building_rhgs_compute_engine_from_scratch |
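A hedged sketch of the packaging and import steps that follow the power-off. The bucket and image names are placeholders, and the tar, gsutil, and gcloud invocations assume the Cloud SDK tooling of that era; consult the Google documentation referenced above for the current procedure.

# Package the RAW disk in the sparse-aware GNU tar format that Compute Engine expects
tar --format=oldgnu -Sczf rhgs-image.tar.gz disk.raw

# Upload the archive to a Cloud Storage bucket (placeholder bucket name)
gsutil cp rhgs-image.tar.gz gs://my-rhgs-images/

# Register the uploaded archive as a custom image
gcloud compute images create rhgs-3-5 --source-uri gs://my-rhgs-images/rhgs-image.tar.gz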
B.2. Tracepoints | B.2. Tracepoints The tracepoints can be found under the /sys/kernel/debug/tracing/ directory assuming that debugfs is mounted in the standard place at the /sys/kernel/debug directory. The events subdirectory contains all the tracing events that may be specified and, provided the gfs2 module is loaded, there will be a gfs2 subdirectory containing further subdirectories, one for each GFS2 event. The contents of the /sys/kernel/debug/tracing/events/gfs2 directory should look roughly like the following: To enable all the GFS2 tracepoints, enter the following command: To enable a specific tracepoint, there is an enable file in each of the individual event subdirectories. The same is true of the filter file which can be used to set an event filter for each event or set of events. The meaning of the individual events is explained in more detail below. The output from the tracepoints is available in ASCII or binary format. This appendix does not currently cover the binary interface. The ASCII interface is available in two ways. To list the current content of the ring buffer, you can enter the following command: This interface is useful in cases where you are using a long-running process for a certain period of time and, after some event, want to look back at the latest captured information in the buffer. An alternative interface, /sys/kernel/debug/tracing/trace_pipe , can be used when all the output is required. Events are read from this file as they occur; there is no historical information available through this interface. The format of the output is the same from both interfaces and is described for each of the GFS2 events in the later sections of this appendix. A utility called trace-cmd is available for reading tracepoint data. For more information on this utility, see the link in Section B.10, "References" . The trace-cmd utility can be used in a similar way to the strace utility, for example to run a command while gathering trace data from various sources. | [
"ls enable gfs2_bmap gfs2_glock_queue gfs2_log_flush filter gfs2_demote_rq gfs2_glock_state_change gfs2_pin gfs2_block_alloc gfs2_glock_put gfs2_log_blocks gfs2_promote",
"echo -n 1 >/sys/kernel/debug/tracing/events/gfs2/enable",
"cat /sys/kernel/debug/tracing/trace"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/ap-tracepoints-gfs2 |
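A short sketch of capturing the same events with the trace-cmd utility mentioned at the end of the section, run on a node with an active GFS2 file system. The 30-second sleep stands in for a real workload.

# Record all GFS2 tracepoints while a workload runs, then read back the captured events
trace-cmd record -e gfs2 -o gfs2-trace.dat sleep 30
trace-cmd report gfs2-trace.dat | less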
3.2. Using the Red Hat Customer Portal | 3.2. Using the Red Hat Customer Portal The Red Hat Customer Portal at https://access.redhat.com/ is the main customer-oriented resource for official information related to Red Hat products. You can use it to find documentation, manage your subscriptions, download products and updates, open support cases, and learn about security updates. 3.2.1. Viewing Security Advisories on the Customer Portal To view security advisories (errata) relevant to the systems for which you have active subscriptions, log into the Customer Portal at https://access.redhat.com/ and click on the Download Products & Updates button on the main page. When you enter the Software & Download Center page, continue by clicking on the Errata button to see a list of advisories pertinent to your registered systems. To browse a list of all security updates for all active Red Hat products, go to Security Security Updates Active Products using the navigation menu at the top of the page. Click on the erratum code in the left part of the table to display more detailed information about the individual advisories. The page contains not only a description of the given erratum, including its causes, consequences, and required fixes, but also a list of all packages that the particular erratum updates along with instructions on how to apply the updates. The page also includes links to relevant references, such as related CVE . 3.2.2. Navigating CVE Customer Portal Pages The CVE ( Common Vulnerabilities and Exposures ) project, maintained by The MITRE Corporation, is a list of standardized names for vulnerabilities and security exposures. To browse a list of CVE that pertain to Red Hat products on the Customer Portal, log into your account at https://access.redhat.com/ and navigate to Security Resources CVE Database using the navigation menu at the top of the page. Click on the CVE code in the left part of the table to display more detailed information about the individual vulnerabilities. The page contains not only a description of the given CVE but also a list of affected Red Hat products along with links to relevant Red Hat errata. 3.2.3. Understanding Issue Severity Classification All security issues discovered in Red Hat products are assigned an impact rating by Red Hat Product Security according to the severity of the problem. The four-point scale consists of the following levels: Low, Moderate, Important, and Critical. In addition to that, every security issue is rated using the Common Vulnerability Scoring System ( CVSS ) base scores. Together, these ratings help you understand the impact of security issues, allowing you to schedule and prioritize upgrade strategies for your systems. Note that the ratings reflect the potential risk of a given vulnerability, which is based on a technical analysis of the bug, not the current threat level. This means that the security impact rating does not change if an exploit is released for a particular flaw. To see a detailed description of the individual levels of severity ratings on the Customer Portal, visit the Severity Ratings page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-using_the_red_hat_customer_portal |
Chapter 3. SELinux Contexts | Chapter 3. SELinux Contexts Processes and files are labeled with an SELinux context that contains additional information, such as an SELinux user, role, type, and, optionally, a level. When running SELinux, all of this information is used to make access control decisions. In Red Hat Enterprise Linux, SELinux provides a combination of Role-Based Access Control (RBAC), Type Enforcement (TE), and, optionally, Multi-Level Security (MLS). The following is an example showing SELinux context. SELinux contexts are used on processes, Linux users, and files, on Linux operating systems that run SELinux. Use the ls -Z command to view the SELinux context of files and directories: SELinux contexts follow the SELinux user:role:type:level syntax. The fields are as follows: SELinux user The SELinux user identity is an identity known to the policy that is authorized for a specific set of roles, and for a specific MLS/MCS range. Each Linux user is mapped to an SELinux user via SELinux policy. This allows Linux users to inherit the restrictions placed on SELinux users. The mapped SELinux user identity is used in the SELinux context for processes in that session, in order to define what roles and levels they can enter. Run the semanage login -l command as the Linux root user to view a list of mappings between SELinux and Linux user accounts (you need to have the policycoreutils-python package installed): Output may differ slightly from system to system. The Login Name column lists Linux users, and the SELinux User column lists which SELinux user the Linux user is mapped to. For processes, the SELinux user limits which roles and levels are accessible. The last column, MLS/MCS Range , is the level used by Multi-Level Security (MLS) and Multi-Category Security (MCS). role Part of SELinux is the Role-Based Access Control (RBAC) security model. The role is an attribute of RBAC. SELinux users are authorized for roles, and roles are authorized for domains. The role serves as an intermediary between domains and SELinux users. The roles that can be entered determine which domains can be entered; ultimately, this controls which object types can be accessed. This helps reduce vulnerability to privilege escalation attacks. type The type is an attribute of Type Enforcement. The type defines a domain for processes, and a type for files. SELinux policy rules define how types can access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. level The level is an attribute of MLS and MCS. An MLS range is a pair of levels, written as lowlevel-highlevel if the levels differ, or lowlevel if the levels are identical ( s0-s0 is the same as s0 ). Each level is a sensitivity-category pair, with categories being optional. If there are categories, the level is written as sensitivity:category-set . If there are no categories, it is written as sensitivity . If the category set is a contiguous series, it can be abbreviated. For example, c0.c3 is the same as c0,c1,c2,c3 . The /etc/selinux/targeted/setrans.conf file maps levels ( s0:c0 ) to human-readable form (that is CompanyConfidential ). Do not edit setrans.conf with a text editor: use the semanage command to make changes. Refer to the semanage (8) manual page for further information. In Red Hat Enterprise Linux, targeted policy enforces MCS, and in MCS, there is just one sensitivity, s0 . 
MCS in Red Hat Enterprise Linux supports 1024 different categories: c0 through to c1023 . s0-s0:c0.c1023 is sensitivity s0 and authorized for all categories. MLS enforces the Bell-La Padula Mandatory Access Model, and is used in Labeled Security Protection Profile (LSPP) environments. To use MLS restrictions, install the selinux-policy-mls package, and configure MLS to be the default SELinux policy. The MLS policy shipped with Red Hat Enterprise Linux omits many program domains that were not part of the evaluated configuration, and therefore, MLS on a desktop workstation is unusable (no support for the X Window System); however, an MLS policy from the upstream SELinux Reference Policy can be built that includes all program domains. For more information on MLS configuration, refer to Section 5.11, "Multi-Level Security (MLS)" . 3.1. Domain Transitions A process in one domain transitions to another domain by executing an application that has the entrypoint type for the new domain. The entrypoint permission is used in SELinux policy, and controls which applications can be used to enter a domain. The following example demonstrates a domain transition: A user wants to change their password. To do this, they run the passwd application. The /usr/bin/passwd executable is labeled with the passwd_exec_t type: The passwd application accesses /etc/shadow , which is labeled with the shadow_t type: An SELinux policy rule states that processes running in the passwd_t domain are allowed to read and write to files labeled with the shadow_t type. The shadow_t type is only applied to files that are required for a password change. This includes /etc/gshadow , /etc/shadow , and their backup files. An SELinux policy rule states that the passwd_t domain has entrypoint permission to the passwd_exec_t type. When a user runs the passwd application, the user's shell process transitions to the passwd_t domain. With SELinux, since the default action is to deny, and a rule exists that allows (among other things) applications running in the passwd_t domain to access files labeled with the shadow_t type, the passwd application is allowed to access /etc/shadow , and update the user's password. This example is not exhaustive, and is used as a basic example to explain domain transition. Although there is an actual rule that allows subjects running in the passwd_t domain to access objects labeled with the shadow_t file type, other SELinux policy rules must be met before the subject can transition to a new domain. In this example, Type Enforcement ensures: The passwd_t domain can only be entered by executing an application labeled with the passwd_exec_t type; can only execute from authorized shared libraries, such as the lib_t type; and cannot execute any other applications. Only authorized domains, such as passwd_t , can write to files labeled with the shadow_t type. Even if other processes are running with superuser privileges, those processes cannot write to files labeled with the shadow_t type, as they are not running in the passwd_t domain. Only authorized domains can transition to the passwd_t domain. For example, the sendmail process running in the sendmail_t domain does not have a legitimate reason to execute passwd ; therefore, it can never transition to the passwd_t domain. Processes running in the passwd_t domain can only read and write to authorized types, such as files labeled with the etc_t or shadow_t types. This prevents the passwd application from being tricked into reading or writing arbitrary files. | [
"~]USD ls -Z file1 -rwxrw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023",
"~]USD ls -Z /usr/bin/passwd -rwsr-xr-x root root system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd",
"~]USD ls -Z /etc/shadow -r--------. root root system_u:object_r:shadow_t:s0 /etc/shadow"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-security-enhanced_linux-selinux_contexts |
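To observe the domain transition described above on a live system, you can compare the context of the passwd executable with the context of a running passwd process. This is only an illustrative sketch; the exact contexts displayed depend on your policy and user mapping.

~]USD ls -Z /usr/bin/passwd        # the executable carries the passwd_exec_t entrypoint type
~]USD ps -eZ | grep passwd         # run this in a second terminal while passwd is waiting for input;
                                   # the process is shown running in the passwd_t domain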
Chapter 156. KafkaRebalanceSpec schema reference | Used in: KafkaRebalance Property Property type Description mode string (one of [remove-disks, remove-brokers, full, add-brokers]) Mode to run the rebalancing. The supported modes are full , add-brokers , remove-brokers , and remove-disks . If not specified, the full mode is used by default. full mode runs the rebalancing across all the brokers in the cluster. add-brokers mode can be used after scaling up the cluster to move some replicas to the newly added brokers. remove-brokers mode can be used before scaling down the cluster to move replicas out of the brokers to be removed. remove-disks mode can be used to move data across the volumes within the same broker . brokers integer array The list of newly added brokers in case of scaling up or the ones to be removed in case of scaling down to use for rebalancing. This list can be used only with the add-brokers and remove-brokers rebalancing modes. It is ignored with full mode. goals string array A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. skipHardGoalCheck boolean Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution from being found. Default is false. rebalanceDisk boolean Enables intra-broker disk balancing, which balances disk space utilization between disks on the same broker. Only applies to Kafka deployments that use JBOD storage with multiple disks. When enabled, inter-broker balancing is disabled. Default is false. excludedTopics string A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format consult the documentation for that class. concurrentPartitionMovementsPerBroker integer The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. concurrentIntraBrokerPartitionMovements integer The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. concurrentLeaderMovements integer The upper bound of ongoing partition leadership movements. Default is 1000. replicationThrottle integer The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. replicaMovementStrategies string array A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. moveReplicasOffVolumes BrokerAndVolumeIds array List of brokers and their corresponding volumes from which replicas need to be moved. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkarebalancespec-reference
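Putting several of these properties together, a KafkaRebalance resource might look like the following sketch. The cluster name, broker IDs, and goals are placeholder values, and the strimzi.io/cluster label is assumed to reference the Kafka cluster being rebalanced; adapt them to your deployment.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster        # Kafka cluster to rebalance (assumed label convention)
spec:
  mode: add-brokers                       # one of full, add-brokers, remove-brokers, remove-disks
  brokers: [3, 4]                         # only used with add-brokers and remove-brokers
  goals:
    - NetworkInboundCapacityGoal
    - DiskCapacityGoal
  skipHardGoalCheck: false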
Chapter 30. event | Chapter 30. event This chapter describes the commands under the event command. 30.1. event trigger create Create new trigger. Usage: Table 30.1. Positional arguments Value Summary name Event trigger name workflow_id Workflow id exchange Event trigger exchange topic Event trigger topic event Event trigger event name workflow_input Workflow input Table 30.2. Command arguments Value Summary -h, --help Show this help message and exit --params PARAMS Workflow params Table 30.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 30.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 30.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 30.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 30.2. event trigger delete Delete trigger. Usage: Table 30.7. Positional arguments Value Summary event_trigger_id Id of event trigger(s). Table 30.8. Command arguments Value Summary -h, --help Show this help message and exit 30.3. event trigger list List all event triggers. Usage: Table 30.9. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 30.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 30.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 30.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 30.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 30.4. event trigger show Show specific event trigger. Usage: Table 30.14. Positional arguments Value Summary event_trigger Event trigger id Table 30.15. Command arguments Value Summary -h, --help Show this help message and exit Table 30.16. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 30.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 30.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 30.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack event trigger create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--params PARAMS] name workflow_id exchange topic event [workflow_input]",
"openstack event trigger delete [-h] event_trigger_id [event_trigger_id ...]",
"openstack event trigger list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]",
"openstack event trigger show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] event_trigger"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/event |
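As a concrete illustration of the positional arguments accepted by openstack event trigger create, a call might look like the sketch below. Every value is a hypothetical placeholder: the workflow ID, exchange, topic, and event name all depend on your Mistral workflows and notification configuration.

openstack event trigger create my_trigger 1f9c6a42-0b6e-4f10-9d52-3e2d8a7c1b11 nova versioned_notifications compute.instance.create.end '{}'
openstack event trigger list
openstack event trigger show <event_trigger_id>

The final positional argument is the workflow input as a JSON string; an empty object is passed here because the hypothetical workflow takes no input.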
C.3. IdM Domain Services and Log Rotation | C.3. IdM Domain Services and Log Rotation Several IdM domain services use the system logrotate service to handle log rotation and compression: named (DNS) httpd (Apache) tomcat sssd krb5kdc (the Kerberos domain controller) The logrotate configuration files are stored in the /etc/logrotate.d/ directory. Example C.1. Default httpd Log Rotation File at /etc/logrotate.d/httpd Warning The logrotate policy files for most of the services create a new log file with the same name, default owner, and default permissions as the log. However, with the files for named and tomcat , a special create rule sets this behavior with explicit permissions as well as user and group ownership. Do not change the permissions or the user and group which own the named and tomcat log files. This is required for both IdM operations and SELinux settings. Changing the ownership of the log rotation policy or of the files can cause the IdM domains services to fail. Additional Resources The 389 Directory Server instances used by IdM as a back end and by the Dogtag Certificate System have their own internal log rotation policies. See the Configuring Subsystem Logs in the Red Hat Directory Server 10 Administration Guide . For details about other potential log rotation settings, such as compression settings or the size of the log files, see the Log Rotation in the System Administrator's Guide or the logrotate (8) man page. | [
"/var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/logrotate |
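When you adjust one of these policies, it can help to dry-run logrotate against the single policy file before relying on the scheduled rotation. The sketch below uses standard logrotate options and makes no IdM-specific assumptions beyond the httpd policy shown above; avoid forcing rotations on a busy production server unless you need to.

# show what logrotate would do for this policy, without changing anything
logrotate -d /etc/logrotate.d/httpd
# force an immediate rotation with verbose output
logrotate -vf /etc/logrotate.d/httpd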
Preface | Preface As a developer of business decisions, you can use Red Hat Process Automation Manager to develop decision services using Decision Model and Notation (DMN) models, Drools Rule Language (DRL) rules, guided decision tables, and other rule-authoring assets. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/pr01 |
Chapter 5. Recommended host practices for IBM Z & IBM(R) LinuxONE environments | Chapter 5. Recommended host practices for IBM Z & IBM(R) LinuxONE environments This topic provides recommended host practices for OpenShift Container Platform on IBM Z and IBM(R) LinuxONE. Note The s390x architecture is unique in many aspects. Therefore, some recommendations made here might not apply to other platforms. Note Unless stated otherwise, these practices apply to both z/VM and Red Hat Enterprise Linux (RHEL) KVM installations on IBM Z and IBM(R) LinuxONE. 5.1. Managing CPU overcommitment In a highly virtualized IBM Z environment, you must carefully plan the infrastructure setup and sizing. One of the most important features of virtualization is the capability to do resource overcommitment, allocating more resources to the virtual machines than actually available at the hypervisor level. This is very workload dependent and there is no golden rule that can be applied to all setups. Depending on your setup, consider these best practices regarding CPU overcommitment: At LPAR level (PR/SM hypervisor), avoid assigning all available physical cores (IFLs) to each LPAR. For example, with four physical IFLs available, you should not define three LPARs with four logical IFLs each. Check and understand LPAR shares and weights. An excessive number of virtual CPUs can adversely affect performance. Do not define more virtual processors to a guest than logical processors are defined to the LPAR. Configure the number of virtual processors per guest for peak workload, not more. Start small and monitor the workload. Increase the vCPU number incrementally if necessary. Not all workloads are suitable for high overcommitment ratios. If the workload is CPU intensive, you will probably not be able to achieve high ratios without performance problems. Workloads that are more I/O intensive can keep consistent performance even with high overcommitment ratios. Additional resources z/VM Common Performance Problems and Solutions z/VM overcommitment considerations LPAR CPU management 5.2. Disable Transparent Huge Pages Transparent Huge Pages (THP) attempt to automate most aspects of creating, managing, and using huge pages. Since THP automatically manages the huge pages, this is not always handled optimally for all types of workloads. THP can lead to performance regressions, since many applications handle huge pages on their own. Therefore, consider disabling THP. 5.3. Boost networking performance with Receive Flow Steering Receive Flow Steering (RFS) extends Receive Packet Steering (RPS) by further reducing network latency. RFS is technically based on RPS, and improves the efficiency of packet processing by increasing the CPU cache hit rate. RFS achieves this, and in addition considers queue length, by determining the most convenient CPU for computation so that cache hits are more likely to occur within the CPU. Thus, the CPU cache is invalidated less and requires fewer cycles to rebuild the cache. This can help reduce packet processing run time. 5.3.1. Use the Machine Config Operator (MCO) to activate RFS Procedure Copy the following MCO sample profile into a YAML file. 
For example, enable-rfs.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf Create the MCO profile: USD oc create -f enable-rfs.yaml Verify that an entry named 50-enable-rfs is listed: USD oc get mc To deactivate, enter: USD oc delete mc 50-enable-rfs Additional resources OpenShift Container Platform on IBM Z: Tune your network performance with RFS Configuring Receive Flow Steering (RFS) Scaling in the Linux Networking Stack 5.4. Choose your networking setup The networking stack is one of the most important components for a Kubernetes-based product like OpenShift Container Platform. For IBM Z setups, the networking setup depends on the hypervisor of your choice. Depending on the workload and the application, the best fit usually changes with the use case and the traffic pattern. Depending on your setup, consider these best practices: Consider all options regarding networking devices to optimize your traffic pattern. Explore the advantages of OSA-Express, RoCE Express, HiperSockets, z/VM VSwitch, Linux Bridge (KVM), and others to decide which option leads to the greatest benefit for your setup. Always use the latest available NIC version. For example, OSA Express 7S 10 GbE shows great improvement compared to OSA Express 6S 10 GbE with transactional workload types, although both are 10 GbE adapters. Each virtual switch adds an additional layer of latency. The load balancer plays an important role for network communication outside the cluster. Consider using a production-grade hardware load balancer if this is critical for your application. OpenShift Container Platform SDN introduces flows and rules, which impact the networking performance. Make sure to consider pod affinities and placements, to benefit from the locality of services where communication is critical. Balance the trade-off between performance and functionality. Additional resources OpenShift Container Platform on IBM Z - Performance Experiences, Hints and Tips OpenShift Container Platform on IBM Z Networking Performance Controlling pod placement on nodes using node affinity rules 5.5. Ensure high disk performance with HyperPAV on z/VM DASD and ECKD devices are commonly used disk types in IBM Z environments. In a typical OpenShift Container Platform setup in z/VM environments, DASD disks are commonly used to support the local storage for the nodes. You can set up HyperPAV alias devices to provide more throughput and overall better I/O performance for the DASD disks that support the z/VM guests. Using HyperPAV for the local storage devices leads to a significant performance benefit. However, you must be aware that there is a trade-off between throughput and CPU costs. 5.5.1. 
Use the Machine Config Operator (MCO) to activate HyperPAV aliases in nodes using z/VM full-pack minidisks For z/VM-based OpenShift Container Platform setups that use full-pack minidisks, you can leverage the advantage of MCO profiles by activating HyperPAV aliases in all of the nodes. You must add YAML configurations for both control plane and compute nodes. Procedure Copy the following MCO sample profile into a YAML file for the control plane node. For example, 05-master-kernelarg-hpav.yaml : USD cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805 Copy the following MCO sample profile into a YAML file for the compute node. For example, 05-worker-kernelarg-hpav.yaml : USD cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805 Note You must modify the rd.dasd arguments to fit the device IDs. Create the MCO profiles: USD oc create -f 05-master-kernelarg-hpav.yaml USD oc create -f 05-worker-kernelarg-hpav.yaml To deactivate, enter: USD oc delete -f 05-master-kernelarg-hpav.yaml USD oc delete -f 05-worker-kernelarg-hpav.yaml Additional resources Using HyperPAV for ECKD DASD Scaling HyperPAV alias devices on Linux guests on z/VM 5.6. RHEL KVM on IBM Z host recommendations Optimizing a KVM virtual server environment strongly depends on the workloads of the virtual servers and on the available resources. The same action that enhances performance in one environment can have adverse effects in another. Finding the best balance for a particular setting can be a challenge and often involves experimentation. The following section introduces some best practices when using OpenShift Container Platform with RHEL KVM on IBM Z and IBM(R) LinuxONE environments. 5.6.1. Use I/O threads for your virtual block devices To make virtual block devices use I/O threads, you must configure one or more I/O threads for the virtual server and each virtual block device to use one of these I/O threads. The following example specifies <iothreads>3</iothreads> to configure three I/O threads, with consecutive decimal thread IDs 1, 2, and 3. The iothread="2" parameter specifies the driver element of the disk device to use the I/O thread with ID 2. Sample I/O thread specification ... <domain> <iothreads>3</iothreads> 1 ... <devices> ... <disk type="block" device="disk"> 2 <driver ... iothread="2"/> </disk> ... </devices> ... </domain> 1 The number of I/O threads. 2 The driver element of the disk device. Threads can increase the performance of I/O operations for disk devices, but they also use memory and CPU resources. You can configure multiple devices to use the same thread. The best mapping of threads to devices depends on the available resources and the workload. Start with a small number of I/O threads. Often, a single I/O thread for all disk devices is sufficient. Do not configure more threads than the number of virtual CPUs, and do not configure idle threads. You can use the virsh iothreadadd command to add I/O threads with specific thread IDs to a running virtual server. 5.6.2. 
Avoid virtual SCSI devices Configure virtual SCSI devices only if you need to address the device through SCSI-specific interfaces. Configure disk space as virtual block devices rather than virtual SCSI devices, regardless of the backing on the host. However, you might need SCSI-specific interfaces for: A LUN for a SCSI-attached tape drive on the host. A DVD ISO file on the host file system that is mounted on a virtual DVD drive. 5.6.3. Configure guest caching for disk Configure your disk devices to do caching by the guest and not by the host. Ensure that the driver element of the disk device includes the cache="none" and io="native" parameters. <disk type="block" device="disk"> <driver name="qemu" type="raw" cache="none" io="native" iothread="1"/> ... </disk> 5.6.4. Exclude the memory balloon device Unless you need a dynamic memory size, do not define a memory balloon device and ensure that libvirt does not create one for you. Include the memballoon parameter as a child of the devices element in your domain configuration XML file. For example: <memballoon model="none"/> 5.6.5. Tune the CPU migration algorithm of the host scheduler Important Do not change the scheduler settings unless you are an expert who understands the implications. Do not apply changes to production systems without testing them and confirming that they have the intended effect. The kernel.sched_migration_cost_ns parameter specifies a time interval in nanoseconds. After the last execution of a task, the CPU cache is considered to have useful content until this interval expires. Increasing this interval results in fewer task migrations. The default value is 500000 ns. If the CPU idle time is higher than expected when there are runnable processes, try reducing this interval. If tasks bounce between CPUs or nodes too often, try increasing it. To dynamically set the interval to 60000 ns, enter the following command: # sysctl kernel.sched_migration_cost_ns=60000 To persistently change the value to 60000 ns, add the following entry to /etc/sysctl.conf : kernel.sched_migration_cost_ns=60000 5.6.6. Disable the cpuset cgroup controller Note This setting applies only to KVM hosts with cgroups version 1. To enable CPU hotplug on the host, disable the cgroup controller. Procedure Open /etc/libvirt/qemu.conf with an editor of your choice. Go to the cgroup_controllers line. Duplicate the entire line and remove the leading number sign (#) from the copy. Remove the cpuset entry, as follows: cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuacct" ] For the new setting to take effect, you must restart the libvirtd daemon: Stop all virtual machines. Run the following command: # systemctl restart libvirtd Restart the virtual machines. This setting persists across host reboots. 5.6.7. Tune the polling period for idle virtual CPUs When a virtual CPU becomes idle, KVM polls for wakeup conditions for the virtual CPU before allocating the host resource. You can specify the time interval during which polling takes place in sysfs at /sys/module/kvm/parameters/halt_poll_ns . During the specified time, polling reduces the wakeup latency for the virtual CPU at the expense of resource usage. Depending on the workload, a longer or shorter time for polling can be beneficial. The time interval is specified in nanoseconds. The default is 50000 ns.
To optimize for low CPU consumption, enter a small value or write 0 to disable polling: # echo 0 > /sys/module/kvm/parameters/halt_poll_ns To optimize for low latency, for example for transactional workloads, enter a large value: # echo 80000 > /sys/module/kvm/parameters/halt_poll_ns Additional resources Linux on IBM Z Performance Tuning for KVM Getting started with virtualization on IBM Z | [
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf",
"oc create -f enable-rfs.yaml",
"oc get mc",
"oc delete mc 50-enable-rfs",
"cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"oc create -f 05-master-kernelarg-hpav.yaml",
"oc create -f 05-worker-kernelarg-hpav.yaml",
"oc delete -f 05-master-kernelarg-hpav.yaml",
"oc delete -f 05-worker-kernelarg-hpav.yaml",
"<domain> <iothreads>3</iothreads> 1 <devices> <disk type=\"block\" device=\"disk\"> 2 <driver ... iothread=\"2\"/> </disk> </devices> </domain>",
"<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>",
"<memballoon model=\"none\"/>",
"sysctl kernel.sched_migration_cost_ns=60000",
"kernel.sched_migration_cost_ns=60000",
"cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]",
"systemctl restart libvirtd",
"echo 0 > /sys/module/kvm/parameters/halt_poll_ns",
"echo 80000 > /sys/module/kvm/parameters/halt_poll_ns"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/ibm-z-recommended-host-practices |
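Section 5.2 above recommends disabling Transparent Huge Pages but does not show a mechanism. On OpenShift Container Platform nodes, one option is to add a kernel argument through a MachineConfig, in the same way as the HyperPAV example. The following is only a sketch: the name and role label are arbitrary, a matching profile would be needed for control plane nodes, and you should confirm that disabling THP actually benefits your workload before rolling it out.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-disable-thp
spec:
  config:
    ignition:
      version: 3.1.0
  kernelArguments:
    - transparent_hugepage=never

Apply it with oc create -f 05-worker-kernelarg-disable-thp.yaml and verify on a node with cat /sys/kernel/mm/transparent_hugepage/enabled.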
14.22.3. Defining a Virtual Network from an XML File | This command defines a virtual network from an XML file; the network is only defined, not instantiated (started). To define the virtual network, run: | [
"net-define file"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-virtual_networking_commands-defining_a_virtual_network_from_an_xml_file |
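As a sketch of the complete workflow, the XML below describes a simple NAT-forwarded network; the network name, bridge name, and addresses are example values only. Because net-define only registers the configuration, the network must still be started (and optionally set to autostart) afterwards.

<network>
  <name>examplenet</name>
  <forward mode='nat'/>
  <bridge name='virbr10'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.100' end='192.168.150.200'/>
    </dhcp>
  </ip>
</network>

# virsh net-define examplenet.xml
# virsh net-start examplenet
# virsh net-autostart examplenet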
Chapter 1. Planning your Red Hat Ansible Automation Platform operator installation on Red Hat OpenShift Container Platform | Chapter 1. Planning your Red Hat Ansible Automation Platform operator installation on Red Hat OpenShift Container Platform Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift. OpenShift operators help install and automate day-2 operations of complex, distributed software on Red Hat OpenShift Container Platform. The Ansible Automation Platform Operator enables you to deploy and manage Ansible Automation Platform components on Red Hat OpenShift Container Platform. You can use this section to help plan your Red Hat Ansible Automation Platform installation on your Red Hat OpenShift Container Platform environment. Before installing, review the supported installation scenarios to determine which meets your requirements. 1.1. About Ansible Automation Platform Operator The Ansible Automation Platform Operator provides cloud-native, push-button deployment of new Ansible Automation Platform instances in your OpenShift environment. The Ansible Automation Platform Operator includes resource types to deploy and manage instances of Automation controller and Private Automation hub. It also includes automation controller job resources for defining and launching jobs inside your automation controller deployments. Deploying Ansible Automation Platform instances with a Kubernetes native operator offers several advantages over launching instances from a playbook deployed on Red Hat OpenShift Container Platform, including upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments. You can install the Ansible Automation Platform Operator from the Red Hat Operators catalog in OperatorHub. 1.2. OpenShift Container Platform version compatibility The Ansible Automation Platform Operator for installing Ansible Automation Platform 2.3 is available on OpenShift Container Platform 4.9 and later versions. Additional resources See the Red Hat Ansible Automation Platform Life Cycle for the most current compatibility details. 1.3. Supported installation scenarios for Red Hat OpenShift Container Platform You can use the OperatorHub on the Red Hat OpenShift Container Platform web console to install the Ansible Automation Platform Operator. Alternatively, you can install the Ansible Automation Platform Operator from the OpenShift Container Platform command-line interface (CLI), oc . Follow one of the workflows below to install the Ansible Automation Platform Operator and use it to install the components of Ansible Automation Platform that you require. Automation controller custom resources first, then automation hub custom resources; Automation hub custom resources first, then automation controller custom resources; Automation controller custom resources; Automation hub custom resources. 1.4. Custom resources You can define custom resources for each primary installation workflow. 1.5. Additional resources See Understanding OperatorHub to learn more about OpenShift Container Platform OperatorHub. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/operator-install-planning
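For the CLI-based installation mentioned above, the operator is typically installed by creating an OperatorGroup and a Subscription with oc. The following is an illustrative sketch only: the namespace is assumed to exist already, and the channel and package name are assumptions that you should verify against the catalog on your cluster (for example with oc get packagemanifests -n openshift-marketplace) before applying.

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aap-operator-group
  namespace: ansible-automation-platform        # assumed target namespace
spec:
  targetNamespaces:
    - ansible-automation-platform
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform
  namespace: ansible-automation-platform
spec:
  channel: stable-2.3                            # assumed channel name
  name: ansible-automation-platform-operator     # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Apply both objects with oc apply -f <file>.yaml and watch the installation progress with oc get csv -n ansible-automation-platform.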
Chapter 25. Rack schema reference | Chapter 25. Rack schema reference Used in: KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of Rack schema properties The rack option configures rack awareness. A rack can represent an availability zone, data center, or an actual rack in your data center. The rack is configured through a topologyKey . topologyKey identifies a label on OpenShift nodes that contains the name of the topology in its value. An example of such a label is topology.kubernetes.io/zone (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions), which contains the name of the availability zone in which the OpenShift node runs. You can configure your Kafka cluster to be aware of the rack in which it runs, and enable additional features such as spreading partition replicas across different racks or consuming messages from the closest replicas. For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints . Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed. 25.1. Spreading partition replicas across racks When rack awareness is configured, Streams for Apache Kafka will set broker.rack configuration for each Kafka broker. The broker.rack configuration assigns a rack ID to each broker. When broker.rack is configured, Kafka brokers will spread partition replicas across as many different racks as possible. When replicas are spread across multiple racks, the probability that multiple replicas will fail at the same time is lower than if they would be in the same rack. Spreading replicas improves resiliency, and is important for availability and reliability. To enable rack awareness in Kafka, add the rack option to the .spec.kafka section of the Kafka custom resource as shown in the example below. Example rack configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone # ... Note The rack in which brokers are running can change in some cases when the pods are deleted or restarted. As a result, the replicas running in different racks might then share the same rack. Use Cruise Control and the KafkaRebalance resource with the RackAwareGoal to make sure that replicas remain distributed across different racks. When rack awareness is enabled in the Kafka custom resource, Streams for Apache Kafka will automatically add the OpenShift preferredDuringSchedulingIgnoredDuringExecution affinity rule to distribute the Kafka brokers across the different racks. However, the preferred rule does not guarantee that the brokers will be spread. Depending on your exact OpenShift and Kafka configurations, you should add additional affinity rules or configure topologySpreadConstraints for both ZooKeeper and Kafka to make sure the nodes are properly distributed accross as many racks as possible. For more information see Configuring pod scheduling . 25.2. Consuming messages from the closest replicas Rack awareness can also be used in consumers to fetch data from the closest replica. This is useful for reducing the load on your network when a Kafka cluster spans multiple datacenters and can also reduce costs when running Kafka in public clouds. However, it can lead to increased latency. In order to be able to consume from the closest replica, rack awareness has to be configured in the Kafka cluster, and the RackAwareReplicaSelector has to be enabled. 
The replica selector plugin provides the logic that enables clients to consume from the nearest replica. The default implementation uses LeaderSelector to always select the leader replica for the client. Specify RackAwareReplicaSelector for the replica.selector.class to switch from the default implementation. Example rack configuration with enabled replica-aware selector apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone config: # ... replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector # ... In addition to the Kafka broker configuration, you also need to specify the client.rack option in your consumers. The client.rack option should specify the rack ID in which the consumer is running. RackAwareReplicaSelector associates matching broker.rack and client.rack IDs, to find the nearest replica and consume from it. If there are multiple replicas in the same rack, RackAwareReplicaSelector always selects the most up-to-date replica. If the rack ID is not specified, or if it cannot find a replica with the same rack ID, it will fall back to the leader replica. Figure 25.1. Example showing client consuming from replicas in the same availability zone You can also configure Kafka Connect, MirrorMaker 2 and Kafka Bridge so that connectors consume messages from the closest replicas. You enable rack awareness in the KafkaConnect , KafkaMirrorMaker2 , and KafkaBridge custom resources. The configuration does does not set affinity rules, but you can also configure affinity or topologySpreadConstraints . For more information see Configuring pod scheduling . When deploying Kafka Connect using Streams for Apache Kafka, you can use the rack section in the KafkaConnect custom resource to automatically configure the client.rack option. Example rack configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... When deploying MirrorMaker 2 using Streams for Apache Kafka, you can use the rack section in the KafkaMirrorMaker2 custom resource to automatically configure the client.rack option. Example rack configuration for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... When deploying Kafka Bridge using Streams for Apache Kafka, you can use the rack section in the KafkaBridge custom resource to automatically configure the client.rack option. Example rack configuration for Kafka Bridge apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... 25.3. Rack schema properties Property Property type Description topologyKey string A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set a broker's broker.rack config, and the client.rack config for Kafka Connect or MirrorMaker 2. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone config: # replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 spec: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # rack: topologyKey: topology.kubernetes.io/zone #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-Rack-reference |
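For a standalone Kafka consumer that is not managed through the custom resources above, client.rack is set directly in the client configuration. A minimal sketch, assuming the client runs in a zone named zone-1 and that the brokers use the RackAwareReplicaSelector as shown earlier (the bootstrap address is a placeholder):

bootstrap.servers=my-cluster-kafka-bootstrap:9092
group.id=my-group
# must match the broker.rack value (the topologyKey label) of the zone the client runs in
client.rack=zone-1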
Part I. Preparing the RHEL installation | Part I. Preparing the RHEL installation Essential steps for preparing a RHEL installation environment addresses system requirements, supported architectures, and offers customization options for installation media. Additionally, it covers methods for creating bootable installation media, setting up network-based repositories, and configuring UEFI HTTP or PXE installation sources. Guidance is also included for systems using UEFI Secure Boot and for installing RHEL on 64-bit IBM Z architecture. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/preparing-the-rhel-installation |
7.25. cluster | 7.25. cluster 7.25.1. RHBA-2015:1363 - cluster bug fix and enhancement update Updated cluster packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Bug Fixes BZ# 1149516 Previously, the gfs2_convert utility or a certain corruption could introduce bogus values for the ondisk inode "di_goal_meta" field. Consequently, these bogus values could affect GFS2 block allocation, cause an EBADSLT error on such inodes, and could disallow the creation of new files in directories or new blocks in regular files. With this update, gfs2_convert calculates the correct values. The fsck.gfs2 utility now also has the capability to identify and fix incorrect inode goal values, and the described problems no longer occur. BZ# 1121693 The gfs2_quota, gfs2_tool, gfs2_grow, and gfs2_jadd utilities did not mount the gfs2 meta file system with the "context" mount option matching the "context" option used for mounting the parent gfs2 file system. Consequently, the affected gfs2 utilities failed with an error message "Device or resource busy" when run with SELinux enabled. The mentioned gfs2 utilities have been updated to pass the "context" mount option of the gfs2 file system to the meta file system, and they no longer fail when SELinux is enabled. BZ# 1133724 A race condition in the dlm_controld daemon could be triggered when reloading the configuration, which caused a dangling file pointer to be written to. Consequently, under certain rare conditions, dlm_controld could terminate unexpectedly with a segmentation fault, leaving Distributed Lock Manager (DLM) lockspaces unmanaged and requiring a system reboot to clear. This bug has been fixed, and dlm_controld no longer crashes when the configuration is updated. BZ# 1087286 Previously, errors generated while updating the resource-agents scheme were sometimes not reported. As a consequence, if an error occurred when updating the resource-agents schema, the update failed silently and later attempts to start the cman service could fail as well. With this update, schema errors are reported, and remedial action can be taken at upgrade time in case of problems. Enhancements BZ# 1099223 The qdiskd daemon now automatically enables the master_wins mode when votes for the quorum disk default to 1 or when the number of votes is explicitly set to 1. As a result, quorum disk configuration is more consistent with the documentation, and a misconfiguration is avoided. BZ# 1095418 A new error message has been added to the qdiskd daemon, which prevents qdiskd from starting if it is configured with no heuristics in a cluster with three or more nodes. Heuristics are required in clusters with three or more nodes using a quorum device for correct operation in the event of a tie-break. Now, if no heuristics are specified and the cluster contains three or more nodes, the cman service fails to start and an error message is returned. This behavior prevents misconfigurations. Users of cluster are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-cluster |
2.2. Removing Control Groups | 2.2. Removing Control Groups Transient cgroups are released automatically as soon as the processes they contain finish. By passing the --remain-after-exit option to systemd-run , you can keep the unit running after its processes have finished so that you can collect runtime information. To stop the unit gracefully, type: Replace name with the name of the service you wish to stop. To terminate one or more of the unit's processes, type as root : Replace name with the name of the unit, for example httpd.service . Use --kill-who to select which processes from the cgroup you wish to terminate. To kill multiple processes at the same time, pass a comma-separated list of PIDs. Replace signal with the type of POSIX signal you wish to send to the specified processes. The default is SIGTERM . For more information, see the systemd.kill manual page. Persistent cgroups are released when the unit is disabled and its configuration file is deleted by running: where name stands for the name of the service to be disabled. | [
"~]# systemctl stop name . service",
"~]# systemctl kill name .service --kill-who= PID ,... --signal= signal",
"~]# systemctl disable name . service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-removing_cgroups |
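To make the --remain-after-exit behaviour mentioned above concrete, the following sketch creates a transient unit, checks it after its process has exited, and then releases it. The unit name and command are arbitrary examples.

~]# systemd-run --unit=example --remain-after-exit echo "transient unit demo"
~]# systemctl status example       # the unit stays loaded after echo exits, so runtime data remains visible
~]# systemctl stop example         # stops the unit and releases its cgroup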
Chapter 3. Build [build.openshift.io/v1] | Chapter 3. Build [build.openshift.io/v1] Description Build encapsulates the inputs needed to produce a new deployable image, as well as the status of the execution and a reference to the Pod which executed the build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BuildSpec has the information to represent a build and also additional information about a build status object BuildStatus contains the status of a build 3.1.1. .spec Description BuildSpec has the information to represent a build and also additional information about a build Type object Required strategy Property Type Description completionDeadlineSeconds integer completionDeadlineSeconds is an optional duration in seconds, counted from the time when a build pod gets scheduled in the system, that the build may be active on a node before the system actively tries to terminate the build; value must be positive integer mountTrustedCA boolean mountTrustedCA bind mounts the cluster's trusted certificate authorities, as defined in the cluster's proxy configuration, into the build. This lets processes within a build trust components signed by custom PKI certificate authorities, such as private artifact repositories and HTTPS proxies. When this field is set to true, the contents of /etc/pki/ca-trust within the build are managed by the build container, and any changes to this directory or its subdirectories (for example - within a Dockerfile RUN instruction) are not persisted in the build's output image. nodeSelector object (string) nodeSelector is a selector which must be true for the build pod to fit on a node If nil, it can be overridden by default build nodeselector values for the cluster. If set to an empty map or a map with any values, default build nodeselector values are ignored. output object BuildOutput is input to a build strategy and describes the container image that the strategy should produce. postCommit object A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. 
As an example, all forms below are equivalent and will execute rake test --verbose . 1. Shell script: "postCommit": { "script": "rake test --verbose", } The above is a convenient form which is equivalent to: "postCommit": { "command": ["/bin/sh", "-ic"], "args": ["rake test --verbose"] } 2. A command as the image entrypoint: "postCommit": { "commit": ["rake", "test", "--verbose"] } Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint . 3. Pass arguments to the default entrypoint: "postCommit": { "args": ["rake", "test", "--verbose"] } This form is only useful if the image entrypoint can handle arguments. 4. Shell script with arguments: "postCommit": { "script": "rake test USD1", "args": ["--verbose"] } This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be "/bin/sh" and USD1, USD2, etc, are the positional arguments from Args. 5. Command with arguments: "postCommit": { "command": ["rake", "test"], "args": ["--verbose"] } This form is equivalent to appending the arguments to the Command slice. It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. resources ResourceRequirements resources computes resource requirements to execute the build. revision object SourceRevision is the revision or commit information from the source for the build serviceAccount string serviceAccount is the name of the ServiceAccount to use to run the pod created by this build. The pod will be allowed to use secrets referenced by the ServiceAccount source object BuildSource is the SCM used for the build. strategy object BuildStrategy contains the details of how to perform a build. triggeredBy array triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. triggeredBy[] object BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. 3.1.2. .spec.output Description BuildOutput is input to a build strategy and describes the container image that the strategy should produce. Type object Property Type Description imageLabels array imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. imageLabels[] object ImageLabel represents a label applied to the resulting image. pushSecret LocalObjectReference PushSecret is the name of a Secret that would be used for setting up the authentication for executing the Docker push to authentication enabled Docker Registry (or Docker Hub). to ObjectReference to defines an optional location to push the output of this build to. Kind must be one of 'ImageStreamTag' or 'DockerImage'. This value will be used to look up a container image repository to push to. In the case of an ImageStreamTag, the ImageStreamTag will be looked for in the namespace of the build unless Namespace is specified. 3.1.3. .spec.output.imageLabels Description imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. Type array 3.1.4. 
.spec.output.imageLabels[] Description ImageLabel represents a label applied to the resulting image. Type object Required name Property Type Description name string name defines the name of the label. It must have non-zero length. value string value defines the literal value of the label. 3.1.5. .spec.postCommit Description A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . Shell script: A command as the image entrypoint: Pass arguments to the default entrypoint: Shell script with arguments: Command with arguments: It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. Type object Property Type Description args array (string) args is a list of arguments that are provided to either Command, Script or the container image's default entrypoint. The arguments are placed immediately after the command to be run. command array (string) command is the command to run. It may not be specified with Script. This might be needed if the image doesn't have /bin/sh , or if you do not want to use a shell. In all other cases, using Script might be more convenient. script string script is a shell script to be run with /bin/sh -ic . It may not be specified with Command. Use Script when a shell script is appropriate to execute the post build hook, for example for running unit tests with rake test . If you need control over the image entrypoint, or if the image does not have /bin/sh , use Command and/or Args. The -i flag is needed to support CentOS and RHEL images that use Software Collections (SCL), in order to have the appropriate collections enabled in the shell. E.g., in the Ruby image, this is necessary to make ruby , bundle and other binaries available in the PATH. 3.1.6. .spec.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.7. .spec.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.8. .spec.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.9. 
.spec.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.10. .spec.source Description BuildSource is the SCM used for the build. Type object Property Type Description binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. configMaps array configMaps represents a list of configMaps and their destinations that will be used for the build. configMaps[] object ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. contextDir string contextDir specifies the sub-directory where the source code for the application exists. This allows to have buildable sources in directory other than root of repository. dockerfile string dockerfile is the raw contents of a Dockerfile which should be built. When this option is specified, the FROM may be modified based on your strategy base image and additional ENV stanzas from your strategy environment will be added after the FROM, but before the rest of your Dockerfile stanzas. The Dockerfile source type may be used with other options like git - in those cases the Git repo will have any innate Dockerfile replaced in the context dir. git object GitBuildSource defines the parameters of a Git SCM images array images describes a set of images to be used to provide source for the build images[] object ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). secrets array secrets represents a list of secrets and their destinations that will be used only for the build. secrets[] object SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. sourceSecret LocalObjectReference sourceSecret is the name of a Secret that would be used for setting up the authentication for cloning private repository. The secret contains valid credentials for remote repository, where the data's key represent the authentication method to be used and value is the base64 encoded credentials. Supported auth methods are: ssh-privatekey. type string type of build input to accept 3.1.11. .spec.source.binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. 
For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 3.1.12. .spec.source.configMaps Description configMaps represents a list of configMaps and their destinations that will be used for the build. Type array 3.1.13. .spec.source.configMaps[] Description ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. Type object Required configMap Property Type Description configMap LocalObjectReference configMap is a reference to an existing configmap that you want to use in your build. destinationDir string destinationDir is the directory where the files from the configmap should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. 3.1.14. .spec.source.git Description GitBuildSource defines the parameters of a Git SCM Type object Required uri Property Type Description httpProxy string httpProxy is a proxy used to reach the git repository over http httpsProxy string httpsProxy is a proxy used to reach the git repository over https noProxy string noProxy is the list of domains for which the proxy should not be used ref string ref is the branch/tag/ref to build. uri string uri points to the source that will be built. The structure of the source will depend on the type of build to run 3.1.15. .spec.source.images Description images describes a set of images to be used to provide source for the build Type array 3.1.16. .spec.source.images[] Description ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). Type object Required from Property Type Description as array (string) A list of image names that this source will be used in place of during a multi-stage container image build. For instance, a Dockerfile that uses "COPY --from=nginx:latest" will first check for an image source that has "nginx:latest" in this field before attempting to pull directly. If the Dockerfile does not reference an image source it is ignored. This field and paths may both be set, in which case the contents will be used twice. from ObjectReference from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to copy source from. paths array paths is a list of source and destination paths to copy from the image. 
This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. paths[] object ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. pullSecret LocalObjectReference pullSecret is a reference to a secret to be used to pull the image from a registry If the image is pulled from the OpenShift registry, this field does not need to be set. 3.1.17. .spec.source.images[].paths Description paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. Type array 3.1.18. .spec.source.images[].paths[] Description ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. Type object Required sourcePath destinationDir Property Type Description destinationDir string destinationDir is the relative directory within the build directory where files copied from the image are placed. sourcePath string sourcePath is the absolute path of the file or directory inside the image to copy to the build directory. If the source path ends in /. then the content of the directory will be copied, but the directory itself will not be created at the destination. 3.1.19. .spec.source.secrets Description secrets represents a list of secrets and their destinations that will be used only for the build. Type array 3.1.20. .spec.source.secrets[] Description SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. Type object Required secret Property Type Description destinationDir string destinationDir is the directory where the files from the secret should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. Later, when the script finishes, all files injected will be truncated to zero length. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. secret LocalObjectReference secret is a reference to an existing secret that you want to use in your build. 3.1.21. .spec.strategy Description BuildStrategy contains the details of how to perform a build. Type object Property Type Description customStrategy object CustomBuildStrategy defines input parameters specific to Custom build. dockerStrategy object DockerBuildStrategy defines input parameters specific to container image build. jenkinsPipelineStrategy object JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines sourceStrategy object SourceBuildStrategy defines input parameters specific to an Source build. type string type is the kind of build strategy. 3.1.22. .spec.strategy.customStrategy Description CustomBuildStrategy defines input parameters specific to Custom build. Type object Required from Property Type Description buildAPIVersion string buildAPIVersion is the requested API version for the Build object serialized and passed to the custom builder env array (EnvVar) env contains additional environment variables you want to pass into a builder container. 
exposeDockerSocket boolean exposeDockerSocket will allow running Docker commands (and build container images) from inside the container. forcePull boolean forcePull describes if the controller should configure the build pod to always pull the images for the builder or only pull if it is not present locally from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries secrets array secrets is a list of additional secrets that will be included in the build pod secrets[] object SecretSpec specifies a secret to be included in a build pod and its corresponding mount point 3.1.23. .spec.strategy.customStrategy.secrets Description secrets is a list of additional secrets that will be included in the build pod Type array 3.1.24. .spec.strategy.customStrategy.secrets[] Description SecretSpec specifies a secret to be included in a build pod and its corresponding mount point Type object Required secretSource mountPath Property Type Description mountPath string mountPath is the path at which to mount the secret secretSource LocalObjectReference secretSource is a reference to the secret 3.1.25. .spec.strategy.dockerStrategy Description DockerBuildStrategy defines input parameters specific to container image build. Type object Property Type Description buildArgs array (EnvVar) buildArgs contains build arguments that will be resolved in the Dockerfile. See https://docs.docker.com/engine/reference/builder/#/arg for more details. NOTE: Only the 'name' and 'value' fields are supported. Any settings on the 'valueFrom' field are ignored. dockerfilePath string dockerfilePath is the path of the Dockerfile that will be used to build the container image, relative to the root of the context (contextDir). Defaults to Dockerfile if unset. env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is a reference to an DockerImage, ImageStreamTag, or ImageStreamImage which overrides the FROM image in the Dockerfile for the build. If the Dockerfile uses multi-stage builds, this will replace the image in the last FROM directive of the file. imageOptimizationPolicy string imageOptimizationPolicy describes what optimizations the system can use when building images to reduce the final size or time spent building the image. The default policy is 'None' which means the final build image will be equivalent to an image created by the container image build API. The experimental policy 'SkipLayers' will avoid commiting new layers in between each image step, and will fail if the Dockerfile cannot provide compatibility with the 'None' policy. An additional experimental policy 'SkipLayersAndWarn' is the same as 'SkipLayers' but simply warns if compatibility cannot be preserved. 
noCache boolean noCache if set to true indicates that the container image build must be executed with the --no-cache=true flag pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 3.1.26. .spec.strategy.dockerStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.27. .spec.strategy.dockerStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 3.1.28. .spec.strategy.dockerStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 3.1.29. .spec.strategy.dockerStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 3.1.30. .spec.strategy.dockerStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 3.1.31. .spec.strategy.jenkinsPipelineStrategy Description JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines Type object Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a build pipeline. jenkinsfile string Jenkinsfile defines the optional raw contents of a Jenkinsfile which defines a Jenkins pipeline build. jenkinsfilePath string JenkinsfilePath is the optional path of the Jenkinsfile that will be used to configure the pipeline relative to the root of the context (contextDir). If both JenkinsfilePath & Jenkinsfile are both not specified, this defaults to Jenkinsfile in the root of the specified contextDir. 3.1.32. .spec.strategy.sourceStrategy Description SourceBuildStrategy defines input parameters specific to an Source build. Type object Required from Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled incremental boolean incremental flag forces the Source build to do incremental builds if true. pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries scripts string scripts is the location of Source scripts volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 3.1.33. .spec.strategy.sourceStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.34. .spec.strategy.sourceStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 3.1.35. .spec.strategy.sourceStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 3.1.36. .spec.strategy.sourceStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 3.1.37. .spec.strategy.sourceStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 3.1.38. .spec.triggeredBy Description triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. Type array 3.1.39. .spec.triggeredBy[] Description BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. Type object Property Type Description bitbucketWebHook object BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. genericWebHook object GenericWebHookCause holds information about a generic WebHook that triggered a build. githubWebHook object GitHubWebHookCause has information about a GitHub webhook that triggered a build. gitlabWebHook object GitLabWebHookCause has information about a GitLab webhook that triggered a build. imageChangeBuild object ImageChangeCause contains information about the image that triggered a build message string message is used to store a human readable message for why the build was triggered. E.g.: "Manually triggered by user", "Configuration change",etc. 3.1.40. .spec.triggeredBy[].bitbucketWebHook Description BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 3.1.41. 
.spec.triggeredBy[].bitbucketWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.42. .spec.triggeredBy[].bitbucketWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.43. .spec.triggeredBy[].bitbucketWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.44. .spec.triggeredBy[].bitbucketWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.45. .spec.triggeredBy[].genericWebHook Description GenericWebHookCause holds information about a generic WebHook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 3.1.46. .spec.triggeredBy[].genericWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.47. .spec.triggeredBy[].genericWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.48. .spec.triggeredBy[].genericWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.49. .spec.triggeredBy[].genericWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.50. .spec.triggeredBy[].githubWebHook Description GitHubWebHookCause has information about a GitHub webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 3.1.51. 
.spec.triggeredBy[].githubWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.52. .spec.triggeredBy[].githubWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.53. .spec.triggeredBy[].githubWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.54. .spec.triggeredBy[].githubWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.55. .spec.triggeredBy[].gitlabWebHook Description GitLabWebHookCause has information about a GitLab webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 3.1.56. .spec.triggeredBy[].gitlabWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.57. .spec.triggeredBy[].gitlabWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.58. .spec.triggeredBy[].gitlabWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.59. .spec.triggeredBy[].gitlabWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.60. .spec.triggeredBy[].imageChangeBuild Description ImageChangeCause contains information about the image that triggered a build Type object Property Type Description fromRef ObjectReference fromRef contains detailed information about an image that triggered a build. imageID string imageID is the ID of the image that triggered a new build. 3.1.61. 
.status Description BuildStatus contains the status of a build Type object Required phase Property Type Description cancelled boolean cancelled describes if a cancel event was triggered for the build. completionTimestamp Time completionTimestamp is a timestamp representing the server time when this Build was finished, whether that build failed or succeeded. It reflects the time at which the Pod running the Build terminated. It is represented in RFC3339 form and is in UTC. conditions array Conditions represents the latest available observations of a build's current state. conditions[] object BuildCondition describes the state of a build at a certain point. config ObjectReference config is an ObjectReference to the BuildConfig this Build is based on. duration integer duration contains time.Duration object describing build time. logSnippet string logSnippet is the last few lines of the build log. This value is only set for builds that failed. message string message is a human-readable message indicating details about why the build has this status. output object BuildStatusOutput contains the status of the built image. outputDockerImageReference string outputDockerImageReference contains a reference to the container image that will be built by this build. Its value is computed from Build.Spec.Output.To, and should include the registry address, so that it can be used to push and pull the image. phase string phase is the point in the build lifecycle. Possible values are "New", "Pending", "Running", "Complete", "Failed", "Error", and "Cancelled". reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. stages array stages contains details about each stage that occurs during the build including start time, duration (in milliseconds), and the steps that occured within each stage. stages[] object StageInfo contains details about a build stage. startTimestamp Time startTimestamp is a timestamp representing the server time when this Build started running in a Pod. It is represented in RFC3339 form and is in UTC. 3.1.62. .status.conditions Description Conditions represents the latest available observations of a build's current state. Type array 3.1.63. .status.conditions[] Description BuildCondition describes the state of a build at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of build condition. 3.1.64. .status.output Description BuildStatusOutput contains the status of the built image. Type object Property Type Description to object BuildStatusOutputTo describes the status of the built image with regards to image registry to which it was supposed to be pushed. 3.1.65. .status.output.to Description BuildStatusOutputTo describes the status of the built image with regards to image registry to which it was supposed to be pushed. Type object Property Type Description imageDigest string imageDigest is the digest of the built container image. The digest uniquely identifies the image in the registry to which it was pushed. Please note that this field may not always be set even if the push completes successfully - e.g. 
when the registry returns no digest or returns it in a format that the builder doesn't understand. 3.1.66. .status.stages Description stages contains details about each stage that occurs during the build including start time, duration (in milliseconds), and the steps that occured within each stage. Type array 3.1.67. .status.stages[] Description StageInfo contains details about a build stage. Type object Property Type Description durationMilliseconds integer durationMilliseconds identifies how long the stage took to complete in milliseconds. Note: the duration of a stage can exceed the sum of the duration of the steps within the stage as not all actions are accounted for in explicit build steps. name string name is a unique identifier for each build stage that occurs. startTime Time startTime is a timestamp representing the server time when this Stage started. It is represented in RFC3339 form and is in UTC. steps array steps contains details about each step that occurs during a build stage including start time and duration in milliseconds. steps[] object StepInfo contains details about a build step. 3.1.68. .status.stages[].steps Description steps contains details about each step that occurs during a build stage including start time and duration in milliseconds. Type array 3.1.69. .status.stages[].steps[] Description StepInfo contains details about a build step. Type object Property Type Description durationMilliseconds integer durationMilliseconds identifies how long the step took to complete in milliseconds. name string name is a unique identifier for each build step. startTime Time startTime is a timestamp representing the server time when this Step started. it is represented in RFC3339 form and is in UTC. 3.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/builds GET : list or watch objects of kind Build /apis/build.openshift.io/v1/watch/builds GET : watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/builds DELETE : delete collection of Build GET : list or watch objects of kind Build POST : create a Build /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds GET : watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name} DELETE : delete a Build GET : read the specified Build PATCH : partially update the specified Build PUT : replace the specified Build /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds/{name} GET : watch changes to an object of kind Build. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/details PUT : replace details of the specified Build /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks POST : connect POST requests to webhooks of BuildConfig /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks/{path} POST : connect POST requests to webhooks of BuildConfig 3.2.1. /apis/build.openshift.io/v1/builds HTTP method GET Description list or watch objects of kind Build Table 3.1. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty 3.2.2. /apis/build.openshift.io/v1/watch/builds HTTP method GET Description watch individual changes to a list of Build. 
deprecated: use the 'watch' parameter with a list operation instead. Table 3.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/build.openshift.io/v1/namespaces/{namespace}/builds HTTP method DELETE Description delete collection of Build Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Build Table 3.5. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty HTTP method POST Description create a Build Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body Build schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty 3.2.4. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds HTTP method GET Description watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. Table 3.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name} Table 3.10. Global path parameters Parameter Type Description name string name of the Build HTTP method DELETE Description delete a Build Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Build Table 3.13. 
HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Build Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Build Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body Build schema Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 3.2.6. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds/{name} Table 3.19. Global path parameters Parameter Type Description name string name of the Build HTTP method GET Description watch changes to an object of kind Build. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.20. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.7. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/details Table 3.21. Global path parameters Parameter Type Description name string name of the Build Table 3.22. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method PUT Description replace details of the specified Build Table 3.23. Body parameters Parameter Type Description body Build schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 3.2.8. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks Table 3.25. Global path parameters Parameter Type Description name string name of the Build HTTP method POST Description connect POST requests to webhooks of BuildConfig Table 3.26. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty 3.2.9. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks/{path} Table 3.27. Global path parameters Parameter Type Description name string name of the Build HTTP method POST Description connect POST requests to webhooks of BuildConfig Table 3.28. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty | [
"\"postCommit\": { \"script\": \"rake test --verbose\", }",
"The above is a convenient form which is equivalent to:",
"\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }",
"\"postCommit\": { \"commit\": [\"rake\", \"test\", \"--verbose\"] }",
"Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.",
"\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }",
"This form is only useful if the image entrypoint can handle arguments.",
"\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }",
"This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.",
"\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }",
"This form is equivalent to appending the arguments to the Command slice."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/build-build-openshift-io-v1 |
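To ground the property tables above, the following is a minimal, hedged sketch of a Build manifest that exercises spec.source, spec.strategy, spec.output, and spec.postCommit. The repository URL, names, and namespace are placeholders invented for illustration, and in practice Build objects are usually created for you by a BuildConfig rather than written by hand.

```yaml
apiVersion: build.openshift.io/v1
kind: Build
metadata:
  name: example-app-build-1      # placeholder name
  namespace: my-project          # placeholder namespace
spec:
  serviceAccount: builder
  source:
    type: Git
    git:
      uri: https://github.com/example/app.git   # placeholder repository
      ref: main
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
      noCache: false
  output:
    to:
      kind: ImageStreamTag
      name: app:latest           # looked up in the build's namespace
  postCommit:
    # "Command with arguments" form from the postCommit description
    command: ["rake", "test"]
    args: ["--verbose"]
```

Running oc describe build example-app-build-1 then shows the triggeredBy causes and the status fields (phase, stages, and so on) documented from section 3.1.61 onward.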
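The endpoints in section 3.2 can be exercised with oc or plain REST calls. A hedged shell sketch follows; the API server URL, namespace, BuildConfig name, and webhook secret are placeholders, not values taken from this reference.

```bash
API=https://api.example.com:6443        # placeholder API server URL
TOKEN=$(oc whoami -t)                   # token for the current oc session

# GET /apis/build.openshift.io/v1/namespaces/{namespace}/builds
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/build.openshift.io/v1/namespaces/my-project/builds"

# Watch changes with the 'watch' parameter, as recommended instead of the
# deprecated /watch/ endpoints
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/build.openshift.io/v1/namespaces/my-project/builds?watch=true"

# POST .../buildconfigs/{name}/webhooks/{path} to trigger a BuildConfig
# through its generic webhook
curl -k -X POST \
  "${API}/apis/build.openshift.io/v1/namespaces/my-project/buildconfigs/example-bc/webhooks/<webhook-secret>/generic"
```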
Part III. Known Issues | Part III. Known Issues This part describes known issues in Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/part-red_hat_enterprise_linux-7.0_release_notes-known_issues |
Chapter 6. Installing the multi-model serving platform | Chapter 6. Installing the multi-model serving platform For deploying small and medium-sized models, OpenShift AI includes a multi-model serving platform that is based on the ModelMesh component. On the multi-model serving platform, multiple models can be deployed from the same model server and share the server resources. To install the multi-model serving platform or ModelMesh, follow the steps described in Installing Red Hat OpenShift AI components by using the CLI or Installing Red Hat OpenShift AI components by using the web console . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/installing-the-multi-model-serving-platform_component-install |
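As a hedged illustration only: on recent OpenShift AI versions the ModelMesh component is typically enabled by setting it to Managed in the DataScienceCluster resource, roughly as sketched below. The exact API version and field names depend on the product version, so treat the linked installation chapters as authoritative.

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    modelmeshserving:
      managementState: Managed    # multi-model serving platform (ModelMesh)
```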
12.8. Removing an Attribute | 12.8. Removing an Attribute This section describes how to delete an attribute using the command line and the web console. 12.8.1. Removing an Attribute Using the Command Line To delete an attribute, first remove it from any entries that still use it, for example with the ldapmodify utility, and then delete the attribute definition with the dsconf utility. For example: Remove the unwanted attributes from any entries which use them (a short sketch follows this section). Delete the attribute: For further details about object class definitions, see Section 12.1.2, "Object Classes" . 12.8.2. Removing an Attribute Using the Web Console To remove an attribute using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select Schema Attributes . Click the Choose Action button next to the attribute you want to remove. Select Delete Attribute . Click Yes to confirm. | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com schema attributetypes remove dateofbirth"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/removing_an_attribute |
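The dsconf command above covers the second step, deleting the attribute definition. The first step, removing the attribute from entries that still use it, is not shown as a command. A hedged sketch with ldapmodify follows; the entry DN is a placeholder and dateofbirth is reused from the example above.

```bash
# Remove the attribute value from an entry that still uses it (placeholder DN)
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com <<EOF
dn: uid=example_user,ou=People,dc=example,dc=com
changetype: modify
delete: dateofbirth
EOF

# Then delete the attribute definition from the schema
dsconf -D "cn=Directory Manager" ldap://server.example.com schema attributetypes remove dateofbirth
```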
Chapter 17. Berkeley Internet Name Domain | Chapter 17. Berkeley Internet Name Domain BIND performs name resolution services using the named daemon. BIND lets users locate computer resources and services by name instead of numerical addresses. In Red Hat Enterprise Linux, the bind package provides a DNS server. Enter the following command to see if the bind package is installed: If it is not installed, use the yum utility as the root user to install it: 17.1. BIND and SELinux The default permissions on the /var/named/slaves/ , /var/named/dynamic/ and /var/named/data/ directories allow zone files to be updated using zone transfers and dynamic DNS updates. Files in /var/named/ are labeled with the named_zone_t type, which is used for master zone files. For a slave server, configure the /etc/named.conf file to place slave zones in /var/named/slaves/ . The following is an example of a domain entry in /etc/named.conf for a slave DNS server that stores the zone file for testdomain.com in /var/named/slaves/ : If a zone file is labeled named_zone_t , the named_write_master_zones Boolean must be enabled to allow zone transfers and dynamic DNS to update the zone file. Also, the mode of the parent directory has to be changed to allow the named user or group read, write and execute access. If zone files in /var/named/ are labeled with the named_cache_t type, a file system relabel or running restorecon -R /var/ will change their type to named_zone_t . | [
"~]USD rpm -q bind package bind is not installed",
"~]# yum install bind",
"zone \"testdomain.com\" { type slave; masters { IP-address; }; file \"/var/named/slaves/db.testdomain.com\"; };"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-berkeley_internet_name_domain |
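A hedged example of the SELinux and permission adjustments described above for a master server; the directory ownership and mode shown are assumptions about a typical layout, not requirements stated in this chapter.

```bash
# Allow named to write master zone files (persistent across reboots)
setsebool -P named_write_master_zones on

# Give the named group read, write, and execute access to the zone directory
chown root:named /var/named
chmod 0770 /var/named

# Reset file contexts so zone files under /var/named get the named_zone_t type
restorecon -R -v /var/named
```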
Chapter 31. Analyzing application performance | Chapter 31. Analyzing application performance Perf is a performance analysis tool. It provides a simple command-line interface and abstracts the CPU hardware difference in Linux performance measurements. Perf is based on the perf_events interface exported by the kernel. One advantage of perf is that it is both kernel and architecture neutral. The analysis data can be reviewed without requiring a specific system configuration. Prerequisites The perf package must be installed on the system. You have administrator privileges. 31.1. Collecting system-wide statistics The perf record command is used for collecting system-wide statistics. It can be used in all processors. Procedure Collect system-wide performance statistics. In this example, all CPUs are denoted with the -a option, and the process was terminated after a few seconds. The results show that it collected 0.725 MB of data and stored it to a newly-created perf.data file. Verification Ensure that the results file was created. 31.2. Archiving performance analysis results You can analyze the results of the perf on other systems using the perf archive command. This may not be necessary, if: Dynamic Shared Objects (DSOs), such as binaries and libraries, are already present in the analysis system, such as the ~/.debug/ cache. Both systems have the same set of binaries. Procedure Create an archive of the results from the perf command. Create a tarball from the archive. 31.3. Analyzing performance analysis results The data from the perf record feature can now be investigated directly using the perf report command. Procedure Analyze the results directly from the perf.data file or from an archived tarball. The output of the report is sorted according to the maximum CPU usage in percentage by the application. It shows if the sample has occurred in the kernel or user space of the process. The report shows information about the module from which the sample was taken: A kernel sample that did not take place in a kernel module is marked with the notation [kernel.kallsyms] . A kernel sample that took place in the kernel module is marked as [module] , [ext4] . For a process in user space, the results might show the shared library linked with the process. The report denotes whether the process also occurs in kernel or user space. The result [.] indicates user space. The result [k] indicates kernel space. Finer grained details are available for review, including data appropriate for experienced perf developers. 31.4. Listing pre-defined events There are a range of available options to get the hardware tracepoint activity. Procedure List pre-defined hardware and software events: 31.5. Getting statistics about specified events You can view specific events using the perf stat command. Procedure View the number of context switches with the perf stat feature: The results show that in 5 seconds, 15619 context switches took place. View file system activity by running a script. The following shows an example script: In another terminal run the perf stat command: The results show that in 5 seconds the script asked to create 5 files, indicating that there are 5 inode requests. 31.6. Additional resources perf help COMMAND perf (1) man page on your system | [
"perf record -a ^C[ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.725 MB perf.data (~31655 samples) ]",
"ls perf.data",
"perf archive",
"tar cvf perf.data.tar.bz2 -C ~/.debug",
"perf report",
"perf list List of pre-defined events (to be used in -e): cpu-cycles OR cycles [Hardware event] stalled-cycles-frontend OR idle-cycles-frontend [Hardware event] stalled-cycles-backend OR idle-cycles-backend [Hardware event] instructions [Hardware event] cache-references [Hardware event] cache-misses [Hardware event] branch-instructions OR branches [Hardware event] branch-misses [Hardware event] bus-cycles [Hardware event] cpu-clock [Software event] task-clock [Software event] page-faults OR faults [Software event] minor-faults [Software event] major-faults [Software event] context-switches OR cs [Software event] cpu-migrations OR migrations [Software event] alignment-faults [Software event] emulation-faults [Software event] ...[output truncated]",
"perf stat -e context-switches -a sleep 5 Performance counter stats for 'sleep 5': 15,619 context-switches 5.002060064 seconds time elapsed",
"for i in {1..100}; do touch /tmp/USDi; sleep 1; done",
"perf stat -e ext4:ext4_request_inode -a sleep 5 Performance counter stats for 'sleep 5': 5 ext4:ext4_request_inode 5.002253620 seconds time elapsed"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_analyzing-application-performance_optimizing-rhel9-for-real-time-for-low-latency-operation |
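The chapter above exercises perf record, perf archive, perf report, and perf stat separately and system-wide. The sketch below (not part of the original chapter) shows the same workflow applied to a single command instead of the whole system; the application name ./my_app, the 99 Hz sampling rate, and the output file name are illustrative assumptions, while the flags themselves (-g for call graphs, -F for sampling frequency, -o/-i for file selection, --stdio for a plain-text report) are standard perf options.

# Record samples, with call graphs, for one command only (assumes ./my_app exists)
perf record -g -F 99 -o my_app.perf.data ./my_app

# Summarize the recorded samples as a plain-text report, sorted by CPU usage
perf report --stdio -i my_app.perf.data

# Count selected software events for the same command
perf stat -e context-switches,page-faults ./my_app

Profiling a single command keeps the resulting data file small and, depending on the kernel's perf_event_paranoid setting, can avoid the elevated privileges that system-wide collection with -a typically requires.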
Chapter 1. Deployment overview | Chapter 1. Deployment overview AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. This guide provides instructions on all the options available for deploying and upgrading AMQ Streams, describing what is deployed, and the order of deployment required to run Apache Kafka in an OpenShift cluster. As well as describing the deployment steps, the guide also provides pre- and post-deployment instructions to prepare for and verify a deployment. Additional deployment options described include the steps to introduce metrics. Upgrade instructions are provided for AMQ Streams and Kafka upgrades. AMQ Streams is designed to work on all types of OpenShift cluster regardless of distribution, from public and private clouds to local deployments intended for development. 1.1. How AMQ Streams supports Kafka AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka. Operators simplify the process of: Deploying and running Kafka clusters Deploying and running Kafka components Configuring access to Kafka Securing access to Kafka Upgrading Kafka Managing brokers Creating and managing topics Creating and managing users 1.2. AMQ Streams Operators AMQ Streams supports Kafka using Operators to deploy and manage the components and dependencies of Kafka to OpenShift. Operators are a method of packaging, deploying, and managing an OpenShift application. AMQ Streams Operators extend OpenShift functionality, automating common and complex tasks related to a Kafka deployment. By implementing knowledge of Kafka operations in code, Kafka administration tasks are simplified and require less manual intervention. Operators AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster. Operators within the AMQ Streams architecture 1.2.1. Cluster Operator AMQ Streams uses the Cluster Operator to deploy and manage clusters for: Kafka (including ZooKeeper, Entity Operator, Kafka Exporter, and Cruise Control) Kafka Connect Kafka MirrorMaker Kafka Bridge Custom resources are used to deploy the clusters. For example, to deploy a Kafka cluster: A Kafka resource with the cluster configuration is created within the OpenShift cluster. The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource. The Cluster Operator can also deploy (through configuration of the Kafka resource): A Topic Operator to provide operator-style topic management through KafkaTopic custom resources A User Operator to provide operator-style user management through KafkaUser custom resources The Topic Operator and User Operator function within the Entity Operator on deployment. Example architecture for the Cluster Operator 1.2.2. Topic Operator The Topic Operator provides a way of managing topics in a Kafka cluster through OpenShift resources. 
Example architecture for the Topic Operator The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics. Specifically, if a KafkaTopic is: Created, the Topic Operator creates the topic Deleted, the Topic Operator deletes the topic Changed, the Topic Operator updates the topic Working in the other direction, if a topic is: Created within the Kafka cluster, the Operator creates a KafkaTopic Deleted from the Kafka cluster, the Operator deletes the KafkaTopic Changed in the Kafka cluster, the Operator updates the KafkaTopic This allows you to declare a KafkaTopic as part of your application's deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics. The Topic Operator maintains information about each topic in a topic store , which is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic custom resources. Updates from operations applied to a local in-memory topic store are persisted to a backup topic store on disk. If a topic is reconfigured or reassigned to other brokers, the KafkaTopic will always be up to date. 1.2.3. User Operator The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster. For example, if a KafkaUser is: Created, the User Operator creates the user it describes Deleted, the User Operator deletes the user it describes Changed, the User Operator updates the user it describes Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Kafka topics can be created by applications directly in Kafka, but it is not expected that the users will be managed directly in the Kafka cluster in parallel with the User Operator. The User Operator allows you to declare a KafkaUser resource as part of your application's deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker. When the user is created, the user credentials are created in a Secret . Your application needs to use the user and its credentials for authentication and to produce or consume messages. In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user's access rights in the KafkaUser declaration. 1.2.4. Feature gates in AMQ Streams Operators You can enable and disable some features of operators using feature gates . Feature gates are set in the operator configuration and have three stages of maturity: alpha, beta, or General Availability (GA). For more information, see Feature gates . 1.3. AMQ Streams custom resources A deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. Custom resources are created as instances of APIs added by Custom resource definitions (CRDs) to extend OpenShift resources. CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. 
Example YAML files are provided with the AMQ Streams distribution. CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation. Additional resources Extend the Kubernetes API with CustomResourceDefinitions 1.3.1. AMQ Streams custom resource example CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage AMQ Streams-specific resources. After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. Depending on the cluster setup, installation typically requires cluster admin privileges. Note Access to manage custom resources is limited to AMQ Streams administrators. For more information, see Designating AMQ Streams administrators in the Deploying and Upgrading AMQ Streams on OpenShift guide. A CRD defines a new kind of resource, such as kind:Kafka , within an OpenShift cluster. The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster. Warning When CRDs are deleted, custom resources of that type are also deleted. Additionally, the resources created by the custom resource, such as pods and statefulsets are also deleted. Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource's kind . The custom resources for AMQ Streams components have common configuration properties, which are defined under spec . To understand the relationship between a CRD and a custom resource, let's look at a sample of the CRD for a Kafka topic. Kafka topic CRD apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # ... singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # ... subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 # ... 1 The metadata for the topic CRD, its name and a label to identify the CRD. 2 The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics . 3 The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic . 4 The information presented when using a get command on the custom resource. 5 The current status of the CRD as described in the schema reference for the resource. 6 openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica. Note You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by 'Crd'. Here is a corresponding example of a KafkaTopic custom resource. 
Kafka topic custom resource apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: "2019-08-20T11:37:00.706Z" status: "True" type: Ready observedGeneration: 1 / ... 1 The kind and apiVersion identify the CRD of which the custom resource is an instance. 2 A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is same as the name of the Kafka resource) to which a topic or user belongs. 3 The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified. 4 Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime . Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API. After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams. 1.4. AMQ Streams installation methods There are two ways to install AMQ Streams on OpenShift. Installation method Description Supported platform Installation artifacts (YAML files) Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site . , deploy the YAML installation artifacts to your OpenShift cluster using oc . You start by deploying the Cluster Operator from install/cluster-operator to a single namespace, multiple namespaces, or all namespaces. OpenShift 4.6 and 4.8 OperatorHub Use the Red Hat Integration - AMQ Streams Operator in the OperatorHub to deploy AMQ Streams to a single namespace or all namespaces. OpenShift 4.6 and 4.8 For the greatest flexibility, choose the installation artifacts method. Choose the OperatorHub method if you want to install AMQ Streams to OpenShift 4.6 and 4.8 in a standard configuration using the web console. The OperatorHub also allows you to take advantage of automatic updates. Both methods install the Cluster Operator to your OpenShift cluster. Use the same method to deploy the other components, starting with the Kafka cluster. If you are using the installation artifacts method, example YAML files are provided. If you are using the OperatorHub, the AMQ Streams Operator makes Kafka components available for installation from the OpenShift web console. AMQ Streams installation artifacts The AMQ Streams installation artifacts contain various YAML files that can be deployed to OpenShift, using oc , to create custom resources, including: Deployments Custom resource definitions (CRDs) Roles and role bindings Service accounts YAML installation files are provided for the Cluster Operator, Topic Operator, User Operator, and the Strimzi Admin role. OperatorHub Starting with OpenShift 4, the Operator Lifecycle Manager (OLM) helps cluster administrators to install, update, and manage the lifecycle of all Operators and their associated services running across their clusters. The OLM is part of the Operator Framework , an open source toolkit designed to manage Kubernetes-native applications (Operators) in an effective, automated, and scalable way. The OperatorHub is part of the OpenShift web console. 
Cluster administrators can use it to discover, install, and upgrade Operators. Operators can be pulled from the OperatorHub, installed on the OpenShift cluster to a single namespace or all namespaces, and managed by the OLM. Engineering teams can then independently manage the software in development, test, and production environments using the OLM. Red Hat Integration - AMQ Streams Operator The Red Hat Integration - AMQ Streams Operator is available to install from the OperatorHub. Once installed, the AMQ Streams Operator deploys the Cluster Operator to your OpenShift cluster, along with the necessary CRDs and role-based access control (RBAC) resources. You still need to install the Kafka components from the OpenShift web console. Additional resources Installing AMQ Streams using the installation artifacts: Section 5.1.1.2, "Deploying the Cluster Operator to watch a single namespace" Section 5.1.1.3, "Deploying the Cluster Operator to watch multiple namespaces" Section 5.1.1.4, "Deploying the Cluster Operator to watch all namespaces" Installing AMQ Streams from the OperatorHub: Section 4.2, "Deploying the AMQ Streams Operator from the OperatorHub" Operators guide in the OpenShift documentation. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-intro_str |
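To complement the custom resource walkthrough above, the following sketch (not part of the original guide) shows how the KafkaTopic example from the text might be applied and inspected with oc. The namespace my-kafka-namespace is an assumption, and the sketch presumes the CRDs and the Cluster Operator are already installed; the resource fields mirror the KafkaTopic sample shown earlier.

# Apply a KafkaTopic custom resource from stdin
cat <<'EOF' | oc apply -n my-kafka-namespace -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
EOF

# List topics using the short name registered by the CRD
oc get kt -n my-kafka-namespace

# Inspect the status conditions maintained by the Topic Operator
oc get kafkatopic my-topic -n my-kafka-namespace -o yaml

Because the Topic Operator watches KafkaTopic resources, applying the manifest is enough for the corresponding Kafka topic to be created; no direct interaction with the Kafka brokers is needed.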
20.2.2. Using the HMC or SE DVD Drive | 20.2.2. Using the HMC or SE DVD Drive Double-click Load from CD-ROM, DVD, or Server . In the dialog box that follows, select Local CD-ROM / DVD , then click Continue . In the next dialog, keep the default selection of generic.ins , then click Continue . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot-installing_in_an_lpar-hmc-dvd
5.3. Minor version updates of RHGS 3.5.z | 5.3. Minor version updates of RHGS 3.5.z For RHGS 3.5.z minor version updates on RHEL 7 and RHEL 8, follow the procedure described in Section 5.2, "In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5" . Note In the case of kernel updates, make sure to reboot the host after the software update is complete. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/sect-minor-version-updates-of-rhgs-3.5.z
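As a rough illustration of the note above (not from the original guide, which defers to the full in-service procedure in Section 5.2), a minimal update-and-reboot sketch for a yum-based RHGS 3.5 node might look as follows; treat the package selection and the reboot check as assumptions, not as a replacement for the documented procedure.

# Sketch only: update the node's packages (Section 5.2 describes the required pre- and post-update steps)
yum update

# needs-restarting comes from yum-utils; -r reports whether a reboot is required (for example, after a kernel update)
needs-restarting -r || reboot

On RHEL 8 nodes the equivalent commands would be dnf update and dnf needs-restarting -r.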
Deploying Red Hat Satellite on Amazon Web Services | Deploying Red Hat Satellite on Amazon Web Services Red Hat Satellite 6.11 Reference guide for deploying Red Hat Satellite Server and Capsules on Amazon Web Services Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/deploying_red_hat_satellite_on_amazon_web_services/index |
Chapter 5. Installing a cluster on Azure Stack Hub using ARM templates | Chapter 5. Installing a cluster on Azure Stack Hub using ARM templates In OpenShift Container Platform version 4.16, you can install a cluster on Microsoft Azure Stack Hub by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was tested using version 2.28.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Configuring your Azure Stack Hub project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure Stack Hub resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure Stack Hub restricts, see Resolve reserved resource name errors in the Azure documentation. 5.3.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. 
Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 5.3.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . You can view Azure's DNS solution by visiting this example for creating DNS zones . 5.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.3.4. 
Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 5.3.5. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. 
For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.6. Creating the installation files for Azure Stack Hub To install OpenShift Container Platform on Microsoft Azure Stack Hub using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You manually create the install-config.yaml file, and then generate and customize the Kubernetes manifests and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 5.6.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. 
Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications for Azure Stack Hub: Set the replicas parameter to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . The compute machines will be provisioned manually later. Update the platform.azure section of the install-config.yaml file to configure your Azure Stack Hub configuration: platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4 1 Specify the Azure Resource Manager endpoint of your Azure Stack Hub environment, like https://management.local.azurestack.external . 2 Specify the name of the resource group that contains the DNS zone for your base domain. 3 Specify the Azure Stack Hub environment, which is used to configure the Azure SDK with the appropriate Azure API endpoints. 4 Specify the name of your Azure Stack Hub region. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure Stack Hub 5.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 6 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 7 baseDomainResourceGroupName: resource_group 8 region: azure_stack_local_region 9 resourceGroupName: existing_resource_group 10 outboundType: Loadbalancer cloudName: AzureStackCloud 11 pullSecret: '{"auths": ...}' 12 fips: false 13 additionalTrustBundle: | 14 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 15 1 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 2 4 You can specify the size of the disk to use in GB. 
Minimum recommendation for control plane nodes is 1024 GB. 5 Specify the name of the cluster. 6 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 7 Specify the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 8 Specify the name of the resource group that contains the DNS zone for your base domain. 9 Specify the name of your Azure Stack Hub local region. 10 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 11 Specify the Azure Stack Hub environment as your target platform. 12 Specify the pull secret required to authenticate your cluster. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 14 If your Azure Stack Hub environment uses an internal certificate authority (CA), add the necessary certificate bundle in .pem format. 15 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure Stack Hub. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into. This is the value of the .platform.azure.region attribute from the install-config.yaml file. 
3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 5.6.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. 
Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If your Azure Stack Hub environment uses an internal certificate authority (CA), you must update the .spec.trustedCA.name field in the <installation_directory>/manifests/cluster-proxy-01-config.yaml file to use user-ca-bundle : ... spec: trustedCA: name: user-ca-bundle ... Later, you must update your bootstrap ignition to include the CA. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exists as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. Manually create your cloud credentials. 
From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider. Sample secrets.yaml file apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region} Create a cco-configmap.yaml file in the manifests directory with the Cloud Credential Operator (CCO) disabled: Sample ConfigMap object apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: "true" data: disabled: "true" To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually manage cloud credentials 5.6.6. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. 
However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. 
If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 5.7. Creating the Azure resource group You must create a Microsoft Azure resource group . This is used during the installation of your OpenShift Container Platform cluster on Azure Stack Hub. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} 5.8. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. 
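Before creating the storage container, you can optionally confirm that the variables used by the remaining commands in this section are set, and check which RHCOS version the exported VHD URL points to. This is a minimal sketch only, not part of the documented procedure; it assumes a Bash shell and the variable names exported earlier in this installation: USD echo "USD{COMPRESSED_VHD_URL}"    # the RHCOS version appears in the file name portion of the URL
USD for v in RESOURCE_GROUP CLUSTER_NAME ACCOUNT_KEY COMPRESSED_VHD_URL; do
      # USD{!v} is Bash indirect expansion: the value of the variable whose name is stored in v
      [ -n "USD{!v}" ] || echo "USD{v} is not set"
    done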
Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Download the compressed RHCOS VHD file locally: USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. You can delete the VHD file after you upload it. Copy the local VHD to a blob: USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 5.9. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure Stack Hub's datacenter DNS integration is used, so you will create a DNS zone. Note The DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a DNS zone that already exists. You can learn more about configuring a DNS zone in Azure Stack Hub by visiting that section. 5.10. Creating a VNet in Azure Stack Hub You must create a virtual network (VNet) in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.10.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 5.1. 
01_vnet.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/01_vnet.json[] 5.11. Deploying the RHCOS cluster image for the Azure Stack Hub infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure Stack Hub for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters storageAccount="USD{CLUSTER_NAME}sa" \ 3 --parameters architecture="<architecture>" 4 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of your Azure storage account. 4 Specify the system architecture. Valid values are x64 (default) or Arm64 . 5.11.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 5.2. 02_storage.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/02_storage.json[] 5.12. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 5.12.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 5.1. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.2. 
Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 5.13. Creating networking and load balancing components in Azure Stack Hub You must configure networking and load balancing in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Load balancing requires the following DNS records: An api DNS record for the API public load balancer in the DNS zone. An api-int DNS record for the API internal load balancer in the DNS zone. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record and an api-int DNS record. When creating the API DNS records, the USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Export the following variable: USD export PRIVATE_IP=`az network lb frontend-ip show -g "USDRESOURCE_GROUP" --lb-name "USD{INFRA_ID}-internal" -n internal-lb-ip --query "privateIpAddress" -o tsv` Create the api DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 Create the api-int DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z "USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" -n api-int -a USD{PRIVATE_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api-int DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60 5.13.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 5.3. 
03_infra.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/03_infra.json[] 5.14. Creating the bootstrap machine in Azure Stack Hub You must create the bootstrap machine in Microsoft Azure Stack Hub to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: If your environment uses a public certificate authority (CA), run this command: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` If your environment uses an internal CA, you must add your PEM encoded bundle to the bootstrap ignition stub so that your bootstrap virtual machine can pull the bootstrap ignition from the storage account. Run the following commands, which assume your CA is in a file called CA.pem : USD export CA="data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\n')" USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url "USDBOOTSTRAP_URL" --arg cert "USDCA" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create --verbose -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 5.14.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 5.4. 04_bootstrap.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/04_bootstrap.json[] 5.15. Creating the control plane machines in Azure Stack Hub You must create the control plane machines in Microsoft Azure Stack Hub for your cluster to use. 
One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the control plane nodes (also known as the master nodes). 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 5.15.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 5.5. 05_masters.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/05_masters.json[] 5.16. Wait for bootstrap completion and remove bootstrap resources in Azure Stack Hub After you create all of the required infrastructure in Microsoft Azure Stack Hub, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
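Before you delete the bootstrap resources in the next step, you can optionally confirm that the API server behind the load balancer is answering requests. This sanity check is a sketch only and is not required; it uses the kubeconfig file that the installation program generated, the same file that the later login and certificate approval steps use: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig
USD oc whoami     # should return system:admin
USD oc get nodes  # the control plane nodes should be listed; compute nodes might not appear until their CSRs are approved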
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 5.17. Creating additional worker machines in Azure Stack Hub You can create worker machines in Microsoft Azure Stack Hub for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 5.17.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 5.6. 06_workers.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/06_workers.json[] 5.18. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface.
You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.21. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure Stack Hub by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the DNS zone. If you are adding this cluster to a new DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 5.22. Completing an Azure Stack Hub installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure Stack Hub user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure Stack Hub infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 
1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See About remote health monitoring for more information about the Telemetry service. | [
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4",
"apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 6 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 7 baseDomainResourceGroupName: resource_group 8 region: azure_stack_local_region 9 resourceGroupName: existing_resource_group 10 outboundType: Loadbalancer cloudName: AzureStackCloud 11 pullSecret: '{\"auths\": ...}' 12 fips: false 13 additionalTrustBundle: | 14 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"spec: trustedCA: name: user-ca-bundle",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region}",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: \"true\" data: disabled: \"true\"",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/01_vnet.json[]",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/02_storage.json[]",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"export PRIVATE_IP=`az network lb frontend-ip show -g \"USDRESOURCE_GROUP\" --lb-name \"USD{INFRA_ID}-internal\" -n internal-lb-ip --query \"privateIpAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z \"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" -n api-int -a USD{PRIVATE_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/03_infra.json[]",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"export CA=\"data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\\n')\"",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url \"USDBOOTSTRAP_URL\" --arg cert \"USDCA\" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create --verbose -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/04_bootstrap.json[]",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/05_masters.json[]",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/azurestack/06_workers.json[]",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure_stack_hub/installing-azure-stack-hub-user-infra |
Validation and troubleshooting | Validation and troubleshooting OpenShift Container Platform 4.15 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/validation_and_troubleshooting/index |
Chapter 2. Working with pods | Chapter 2. Working with pods 2.1. Using pods A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. 2.1.1. Understanding pods Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking. Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers. OpenShift Container Platform treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Container Platform implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users. Note For the maximum number of pods per OpenShift Container Platform node host, see the Cluster Limits. Warning Bare pods that are not managed by a replication controller will be not rescheduled upon node disruption. 2.1.2. Example pod configurations OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. The following is an example definition of a pod. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here: Pod object definition (YAML) kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - "1000000" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: ["ALL"] resources: limits: memory: "100Mi" cpu: "1" requests: memory: "100Mi" cpu: "1" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi 1 Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. 2 The pod restart policy with possible values Always , OnFailure , and Never . The default value is Always . 3 OpenShift Container Platform defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed. 4 containers specifies an array of one or more container definitions. 5 The container specifies where external storage volumes are mounted within the container. 6 Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. 
It is safe to mount the host by using /host. 7 Each container in the pod is instantiated from its own container image. 8 The pod defines storage volumes that are available to its container(s) to use. If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . Note This pod definition does not include attributes that are filled by OpenShift Container Platform automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods. 2.1.3. Additional resources For more information on pods and storage see Understanding persistent storage and Understanding ephemeral storage . 2.2. Viewing pods As an administrator, you can view the pods in your cluster and determine the health of those pods and the cluster as a whole. 2.2.1. About pods OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods. 2.2.2. Viewing pods in a project You can view a list of pods associated with the current project, including the number of replicas, the current status, number of restarts and the age of the pod. Procedure To view the pods in a project: Change to the project: USD oc project <project-name> Run the following command: USD oc get pods For example: USD oc get pods Example output NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m Add the -o wide flags to view the pod IP address and the node where the pod is located. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none> 2.2.3. Viewing pod usage statistics You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: Run the following command: USD oc adm top pods For example: USD oc adm top pods -n openshift-console Example output NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi Run the following command to view the usage statistics for pods with labels: USD oc adm top pod --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . For example: USD oc adm top pod --selector='name=my-pod' 2.2.4. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ).
Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 2.3. Configuring an OpenShift Container Platform cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers using such tools as what a pod does when it exits, ensuring that the required number of pods is always running, when to restart pods designed to run only once, limit the bandwidth available to pods, and how to keep pods running during disruptions. 2.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Container Platform starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by runs. Note Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. 
It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Container Platform uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 2.3.2. Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods must always be available, even during a disruption. maxUnavailable is the number of pods can be unavailable during a disruption. Note Available refers to the number of pods that has condition Ready=True . Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to the project: USD oc create -f </path/to/file> -n <project_name> 2.3.3.2. Specifying the eviction policy for unhealthy pods When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction. You can choose one of the following policies: IfHealthyBudget Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted. AlwaysAllow Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget are met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status. Note It is recommended to set the unhealthyPodEvictionPolicy field to AlwaysAllow in the PodDisruptionBudget object to support the eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed.
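For context, the node drain that these eviction policies affect is typically started with the oc adm drain command. The node name below is a placeholder; during the drain, each eviction is checked against any pod disruption budget that matches the pod, including its unhealthyPodEvictionPolicy setting:

$ oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data

If pods that never become Ready block the drain under the default IfHealthyBudget behavior, setting the budget to AlwaysAllow allows those unhealthy pods to be evicted so that the drain can complete.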
Procedure Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy: Example pod-disruption-budget.yaml file apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1 1 Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty. Create the PodDisruptionBudget object by running the following command: USD oc create -f pod-disruption-budget.yaml With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB. Additional resources Enabling features using feature gates Unhealthy Pod Eviction Policy in the Kubernetes documentation 2.3.4. Preventing pod removal using critical pods There are a number of core components that are critical to a fully functional cluster, but, run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted. Procedure To make a pod critical: Create a Pod spec or edit existing pods to include the system-cluster-critical priority class: apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1 # ... 1 Default priority class for pods that should never be evicted from a node. Alternatively, you can specify system-node-critical for pods that are important to the cluster but can be removed if necessary. Create the pod: USD oc create -f <file-name>.yaml 2.3.5. Reducing pod timeouts when using persistent volumes with high file counts If a storage volume contains many files (~1,000,000 or greater), you might experience pod timeouts. This can occur because, when volumes are mounted, OpenShift Container Platform recursively changes the ownership and permissions of the contents of each volume in order to match the fsGroup specified in a pod's securityContext . For large volumes, checking and changing the ownership and permissions can be time consuming, resulting in a very slow pod startup. You can reduce this delay by applying one of the following workarounds: Use a security context constraint (SCC) to skip the SELinux relabeling for a volume. Use the fsGroupChangePolicy field inside an SCC to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. Use the Cluster Resource Override Operator to automatically apply an SCC to skip the SELinux relabeling. Use a runtime class to skip the SELinux relabeling for a volume. For information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 2.4. Automatically scaling pods with the horizontal pod autoscaler As a developer, you can use a horizontal pod autoscaler (HPA) to specify how OpenShift Container Platform should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. You can create an HPA for any deployment, deployment config, replica set, replication controller, or stateful set. 
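For example, the quickest way to try this against an existing workload is the oc autoscale command. The frontend deployment name used here is only illustrative; the equivalent YAML-based configuration, memory-based scaling, and scaling policies are covered later in this section:

$ oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=50

This creates a HorizontalPodAutoscaler object that keeps between 2 and 10 replicas of the frontend deployment and targets an average CPU utilization of 50% of the requested CPU across those pods.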
For information on scaling pods based on custom metrics, see Automatically scaling pods based on custom metrics . Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. For more information on these objects, see Understanding deployments . 2.4.1. Understanding horizontal pod autoscalers You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. After you create a horizontal pod autoscaler, OpenShift Container Platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization with the desired metric utilization, and scales up or down accordingly. The query and scaling occurs at a regular interval, but can take one to two minutes before metrics become available. For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase. OpenShift Container Platform automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the unready state have 0 CPU usage when scaling up and the autoscaler ignores the pods when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU when scaling down. This allows for more stability during the HPA decision. To use this feature, you must configure readiness checks to determine if a new pod is ready for use. To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. 2.4.1.1. Supported metrics The following metrics are supported by horizontal pod autoscalers: Table 2.1. Metrics Metric Description API version CPU utilization Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU. autoscaling/v1 , autoscaling/v2 Memory utilization Amount of memory used. Can be used to calculate a percentage of the pod's requested memory. autoscaling/v2 Important For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average: An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod. A decrease in replica count must lead to an overall increase in per-pod memory usage. Use the OpenShift Container Platform web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling. The following example shows autoscaling for the hello-node Deployment object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. 
If CPU usage on the pods reaches 75%, the pods increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 Example output horizontalpodautoscaler.autoscaling/hello-node autoscaled Sample YAML to create an HPA for the hello-node deployment object with minReplicas set to 3 apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0 After you create the HPA, you can view the new state of the deployment by running the following command: USD oc get deployment hello-node There are now 5 pods in the deployment: Example output NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config 2.4.2. How does the HPA work? The horizontal pod autoscaler (HPA) extends the concept of pod auto-scaling. The HPA lets you create and manage a group of load-balanced nodes. The HPA automatically increases or decreases the number of pods when a given CPU or memory threshold is crossed. Figure 2.1. High level workflow of the HPA The HPA is an API resource in the Kubernetes autoscaling API group. The autoscaler works as a control loop with a default of 15 seconds for the sync period. During this period, the controller manager queries the CPU, memory utilization, or both, against what is defined in the YAML file for the HPA. The controller manager obtains the utilization metrics from the resource metrics API for per-pod resource metrics like CPU or memory, for each pod that is targeted by the HPA. If a utilization value target is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each pod. The controller then takes the average of utilization across all targeted pods and produces a ratio that is used to scale the number of desired replicas. The HPA is configured to fetch metrics from metrics.k8s.io , which is provided by the metrics server. Because of the dynamic nature of metrics evaluation, the number of replicas can fluctuate during scaling for a group of replicas. Note To implement the HPA, all targeted pods must have a resource request set on their containers. 2.4.3. About requests and limits The scheduler uses the resource request that you specify for containers in a pod, to decide which node to place the pod on. The kubelet enforces the resource limit that you specify for a container to ensure that the container is not allowed to use more than the specified limit. The kubelet also reserves the request amount of that system resource specifically for that container to use. How to use resource metrics? In the pod specifications, you must specify the resource requests, such as CPU and memory. The HPA uses this specification to determine the resource utilization and then scales the target up or down. For example, the HPA object uses the following metric source: type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60 In this example, the HPA keeps the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current resource usage to the requested resource of the pod. 2.4.4. Best practices All pods must have resource requests configured The HPA makes a scaling decision based on the observed CPU or memory utilization values of pods in an OpenShift Container Platform cluster. 
Utilization values are calculated as a percentage of the resource requests of each pod. Missing resource request values can affect the optimal performance of the HPA. Configure the cool down period During horizontal pod autoscaling, there might be a rapid scaling of events without a time gap. Configure the cool down period to prevent frequent replica fluctuations. You can specify a cool down period by configuring the stabilizationWindowSeconds field. The stabilization window is used to restrict the fluctuation of replicas count when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses this window to infer a desired state and avoid unwanted changes to workload scale. For example, a stabilization window is specified for the scaleDown field: behavior: scaleDown: stabilizationWindowSeconds: 300 In the above example, all desired states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm frequently remove pods only to trigger recreating an equivalent pod just moments later. 2.4.4.1. Scaling policies The autoscaling/v2 API allows you to add scaling policies to a horizontal pod autoscaler. A scaling policy controls how the OpenShift Container Platform horizontal pod autoscaler (HPA) scales pods. Scaling policies allow you to restrict the rate that HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a stabilization window , which uses previously computed desired states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine which policy is used, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs scaling, as needed, in further iterations. Sample HPA object with a scaling policy apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0 ... 1 Specifies the direction for the scaling policy, either scaleDown or scaleUp . This example creates a policy for scaling down. 2 Defines the scaling policy. 3 Determines if the policy scales by a specific number of pods or a percentage of pods during each iteration. The default value is pods . 4 Limits the amount of scaling, either the number of pods or percentage of pods, during each iteration. There is no default value for scaling down by number of pods. 5 Determines the length of a scaling iteration. The default value is 15 seconds. 6 The default value for scaling down by percentage is 100%. 7 Determines which policy to use first, if multiple policies are defined. Specify Max to use the policy that allows the highest amount of change, Min to use the policy that allows the lowest amount of change, or Disabled to prevent the HPA from scaling in that policy direction. The default value is Max . 8 Determines the time period the HPA should look back at desired states. The default value is 0 . 9 This example creates a policy for scaling up. 10 Limits the amount of scaling up by the number of pods. 
The default value for scaling up the number of pods is 4%. 11 Limits the amount of scaling up by the percentage of pods. The default value for scaling up by percentage is 100%. Example policy for scaling down apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: ... minReplicas: 20 ... behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled In this example, when the number of pods is greater than 40, the percent-based policy is used for scaling down, as that policy results in a larger change, as required by the selectPolicy . If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the type: Percent and value: 10 parameters), over one minute ( periodSeconds: 60 ). For the iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls below 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time ( type: Pods and value: 4 ), over 30 seconds ( periodSeconds: 30 ), until there are 20 replicas remaining ( minReplicas ). The selectPolicy: Disabled parameter prevents the HPA from scaling up the pods. You can manually scale up by adjusting the number of replicas in the replica set or deployment set, if needed. If set, you can view the scaling policy by using the oc edit command: USD oc edit hpa hpa-resource-metrics-memory Example output apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior:\ '{"ScaleUp":{"StabilizationWindowSeconds":0,"SelectPolicy":"Max","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":15},{"Type":"Percent","Value":100,"PeriodSeconds":15}]},\ "ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":60},{"Type":"Percent","Value":10,"PeriodSeconds":60}]}}' ... 2.4.5. Creating a horizontal pod autoscaler by using the web console From the web console, you can create a horizontal pod autoscaler (HPA) that specifies the minimum and maximum number of pods you want to run on a Deployment or DeploymentConfig object. You can also define the amount of CPU or memory usage that your pods should target. Note An HPA cannot be added to deployments that are part of an Operator-backed service, Knative service, or Helm chart. Procedure To create an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Add HorizontalPodAutoscaler to open the Add HorizontalPodAutoscaler form. Figure 2.2. Add HorizontalPodAutoscaler From the Add HorizontalPodAutoscaler form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click Save . Note If any of the values for CPU and memory usage are missing, a warning is displayed. To edit an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Edit HorizontalPodAutoscaler to open the Edit Horizontal Pod Autoscaler form. 
From the Edit Horizontal Pod Autoscaler form, edit the minimum and maximum pod limits and the CPU and memory usage, and click Save . Note While creating or editing the horizontal pod autoscaler in the web console, you can switch from Form view to YAML view . To remove an HPA in the web console: In the Topology view, click the node to reveal the side panel. From the Actions drop-down list, select Remove HorizontalPodAutoscaler . In the confirmation pop-up window, click Remove to remove the HPA. 2.4.6. Creating a horizontal pod autoscaler for CPU utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the CPU usage you specify. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. When autoscaling for CPU utilization, you can use the oc autoscale command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. To autoscale for a specific CPU value, create a HorizontalPodAutoscaler object with the target CPU and pod limits. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To create a horizontal pod autoscaler for CPU utilization: Perform one of the following: To scale based on the percent of CPU utilization, create a HorizontalPodAutoscaler object for an existing object: USD oc autoscale <object_type>/<name> \ 1 --min <number> \ 2 --max <number> \ 3 --cpu-percent=<percent> 4 1 Specify the type and name of the object to autoscale. The object must exist and be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 2 Optionally, specify the minimum number of replicas when scaling down. 3 Specify the maximum number of replicas when scaling up. 4 Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. For example, the following command shows autoscaling for the hello-node deployment object. The initial deployment requires 3 pods. 
The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods will increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 To scale for a specific CPU value, create a YAML file similar to the following for an existing object: Create a YAML file similar to the following: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify cpu for CPU utilization. 10 Set to AverageValue . 11 Set to averageValue with the targeted CPU value. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml Verify that the horizontal pod autoscaler was created: USD oc get hpa cpu-autoscale Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m 2.4.7. Creating a horizontal pod autoscaler object for memory utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. For memory utilization, you can specify the minimum and maximum number of pods and the average memory utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler Example output Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none> Procedure To create a horizontal pod autoscaler for memory utilization: Create a YAML file for one of the following: To scale for a specific memory value, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , or Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set the type to AverageValue . 11 Specify averageValue and a specific memory value. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. To scale for a percentage, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a ReplicationController, use v1 . For a DeploymentConfig, use apps.openshift.io/v1 . For a Deployment, ReplicaSet, Statefulset object, use apps/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 
8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set to Utilization . 11 Specify averageUtilization and a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml For example: USD oc create -f hpa.yaml Example output horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created Verify that the horizontal pod autoscaler was created: USD oc get hpa hpa-resource-metrics-memory Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m USD oc describe hpa hpa-resource-metrics-memory Example output Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target 2.4.8. Understanding horizontal pod autoscaler status conditions by using the CLI You can use the status conditions set to determine whether or not the horizontal pod autoscaler (HPA) is able to scale and whether or not it is currently restricted in any way. The HPA status conditions are available with the v2 version of the autoscaling API. The HPA responds with the following status conditions: The AbleToScale condition indicates whether HPA is able to fetch and update metrics, as well as whether any backoff-related conditions could prevent scaling. A True condition indicates scaling is allowed. A False condition indicates scaling is not allowed for the reason specified. The ScalingActive condition indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired metrics. A True condition indicates metrics is working properly. A False condition generally indicates a problem with fetching metrics. The ScalingLimited condition indicates that the desired scale was capped by the maximum or minimum of the horizontal pod autoscaler. A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale. A False condition indicates that the requested scaling is allowed. 
USD oc describe hpa cm-test Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events: 1 The horizontal pod autoscaler status messages. The following is an example of a pod that is unable to scale: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind "ReplicationController" in group "apps" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind "ReplicationController" in group "apps" The following is an example of a pod that could not obtain the needed metrics for scaling: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API The following is an example of a pod where the requested autoscaling was less than the required minimums: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.8.1. Viewing horizontal pod autoscaler status conditions by using the CLI You can view the status conditions set on a pod by the horizontal pod autoscaler (HPA). Note The horizontal pod autoscaler status conditions are available with the v2 version of the autoscaling API. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To view the status conditions on a pod, use the following command with the name of the pod: USD oc describe hpa <pod-name> For example: USD oc describe hpa cm-test The conditions appear in the Conditions field in the output. Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.9. Additional resources For more information on replication controllers and deployment controllers, see Understanding deployments and deployment configs . For an example on the usage of HPA, see Horizontal Pod Autoscaling of Quarkus Application Based on Memory Utilization . 2.5. Automatically adjust pod resource levels with the vertical pod autoscaler The OpenShift Container Platform Vertical Pod Autoscaler Operator (VPA) automatically reviews the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. The VPA uses individual custom resources (CR) to update all of the pods in a project that are associated with any built-in workload objects, including the following object types: Deployment DeploymentConfig StatefulSet Job DaemonSet ReplicaSet ReplicationController The VPA can also update certain custom resource object that manage pods, as described in Using the Vertical Pod Autoscaler Operator with Custom Resources . The VPA helps you to understand the optimal CPU and memory usage for your pods and can automatically maintain pod resources through the pod lifecycle. 2.5.1. About the Vertical Pod Autoscaler Operator The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions that the VPA Operator should take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project. The VPA Operator consists of three components, each of which has its own pod in the VPA namespace: Recommender The VPA recommender monitors the current and past resource consumption and, based on this data, determines the optimal CPU and memory resources for the pods in the associated workload object. Updater The VPA updater checks if the pods in the associated workload object have the correct resources. 
If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pod so that they can be recreated by their controllers with the updated requests. Admission controller The VPA admission controller sets the correct resource requests on each new pod in the associated workload object, whether the pod is new or was recreated by its controller due to the VPA updater actions. You can use the default recommender or use your own alternative recommender to autoscale based on your own algorithms. The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough. The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then re-deploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and deletes the pod. The workload object, such as replica set, restarts the pods and the VPA updates the new pod with its recommended resources. For developers, you can use the VPA to help ensure your pods stay up during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod. Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. Note If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the VPA. 2.5.2. Installing the Vertical Pod Autoscaler Operator You can use the OpenShift Container Platform web console to install the Vertical Pod Autoscaler Operator (VPA). Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose VerticalPodAutoscaler from the list of available Operators, and click Install . 
On the Install Operator page, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-vertical-pod-autoscaler namespace, which is automatically created if it does not exist. Click Install . Verification Verify the installation by listing the VPA Operator components: Navigate to Workloads Pods . Select the openshift-vertical-pod-autoscaler project from the drop-down menu and verify that there are four pods running. Navigate to Workloads Deployments to verify that there are four deployments running. Optional: Verify the installation in the OpenShift Container Platform CLI using the following command: USD oc get all -n openshift-vertical-pod-autoscaler The output shows four pods and four deployments: Example output NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s 2.5.3. About Using the Vertical Pod Autoscaler Operator To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the pods associated with that workload object. You can use a VPA with a deployment, stateful set, job, daemon set, replica set, or replication controller workload object. The VPA CR must be in the same project as the pods you want to monitor. You use the VPA CR to associate a workload object and specify which mode the VPA operates in: The Auto and Recreate modes automatically apply the VPA CPU and memory recommendations throughout the pod lifetime. The VPA deletes any pods in the project that are out of alignment with its recommendations. When redeployed by the workload object, the VPA updates the new pods with its recommendations. The Initial mode automatically applies VPA recommendations only at pod creation. The Off mode only provides recommended resource limits and requests, allowing you to manually apply the recommendations. The Off mode does not update pods. You can also use the CR to opt certain containers out of VPA evaluation and updates. For example, a pod has the following limits and requests: resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi After creating a VPA that is set to Auto , the VPA learns the resource usage and deletes the pod.
When redeployed, the pod uses the new resource limits and requests: resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml After a few minutes, the output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... The output shows the recommended resources, target , the minimum recommended resources, lowerBound , the highest recommended resources, upperBound , and the most recent resource recommendations, uncappedTarget . The VPA uses the lowerBound and upperBound values to determine if a pod needs to be updated. If a pod has resource requests below the lowerBound values or above the upperBound values, the VPA terminates and recreates the pod with the target values. 2.5.3.1. Changing the VPA minimum value By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete and update their pods. As a result, workload objects that specify fewer than two replicas are not automatically acted upon by the VPA. The VPA does update new pods from these workload objects if the pods are restarted by some process external to the VPA. You can change this cluster-wide minimum value by modifying the minReplicas parameter in the VerticalPodAutoscalerController custom resource (CR). For example, if you set minReplicas to 3 , the VPA does not delete and update pods for workload objects that specify fewer than three replicas. Note If you set minReplicas to 1 , the VPA can delete the only pod for a workload object that specifies only one replica. You should use this setting with one-replica objects only if your workload can tolerate downtime whenever the VPA deletes a pod to adjust its resources. To avoid unwanted downtime with one-replica objects, configure the VPA CRs with the podUpdatePolicy set to Initial , which automatically updates the pod only when it is restarted by some process external to the VPA, or Off , which allows you to update the pod manually at an appropriate time for your application. Example VerticalPodAutoscalerController object apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: "2021-04-21T19:29:49Z" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: "142172" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 1 Specify the minimum number of replicas in a workload object for the VPA to act on. Any objects with replicas fewer than the minimum are not automatically deleted by the VPA. 2.5.3.2. Automatically applying VPA recommendations To use the VPA to automatically update pods, create a VPA CR for a specific workload object with updateMode set to Auto or Recreate . When the pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. 
The VPA deletes any pods that do not meet the VPA recommendations for CPU and memory. When redeployed, the pods use the new resource limits and requests based on the VPA recommendations, honoring any pod disruption budget set for your applications. The recommendations are added to the status field of the VPA CR for reference. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . Example VPA CR for the Auto mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto or Recreate : Auto . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. Recreate . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Note Before a VPA can determine recommendations for resources and apply the recommended resources to new pods, operating pods must exist and be running in the project. If a workload's resource usage, such as CPU and memory, is consistent, the VPA can determine recommendations for resources in a few minutes. If a workload's resource usage is inconsistent, the VPA must collect metrics at various resource usage intervals for the VPA to make an accurate recommendation. 2.5.3.3. Automatically applying VPA recommendations on pod creation To use the VPA to apply the recommended resources only when a pod is first deployed, create a VPA CR for a specific workload object with updateMode set to Initial . Then, manually delete any pods associated with the workload object that you want to use the VPA recommendations. In the Initial mode, the VPA does not delete pods and does not update the pods as it learns new resource recommendations. Example VPA CR for the Initial mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Initial" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Initial . The VPA assigns resources when pods are created and does not change the resources during the lifetime of the pod. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.3.4. 
Manually applying VPA recommendations To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with updateMode set to off . When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the status field of the VPA CR. The VPA does not update the pods as it determines new resource recommendations. Example VPA CR for the Off mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Off" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Off . You can view the recommendations using the following command. USD oc get vpa <vpa-name> --output yaml With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the pods using the recommended resources. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.3.5. Exempting containers from applying VPA recommendations If your workload object has multiple containers and you do not want the VPA to evaluate and act on all of the containers, create a VPA CR for a specific workload object and add a resourcePolicy to opt-out specific containers. When the VPA updates the pods with recommended resources, any containers with a resourcePolicy are not updated and the VPA does not present recommendations for those containers in the pod. apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto , Recreate , or Off . The Recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. 4 Specify the containers you want to opt-out and set mode to Off . For example, a pod has two containers, the same resource requests and limits: # ... spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi # ... After launching a VPA CR with the backend container set to opt-out, the VPA terminates and recreates the pod with the recommended resources applied only to the frontend container: ... spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k ... name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi ... 2.5.3.6. Performance tuning the VPA Operator As a cluster administrator, you can tune the performance of your Vertical Pod Autoscaler Operator (VPA) to limit the rate at which the VPA makes requests of the Kubernetes API server and to specify the CPU and memory resources for the VPA recommender, updater, and admission controller component pods. 
Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload had been running for some time. These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions. You can perform the following tunings on the VPA components by editing the VerticalPodAutoscalerController custom resource (CR): To prevent throttling and pod admission delays, set the queries-per-second (QPS) and burst rates for VPA requests of the Kubernetes API server by using the kube-api-qps and kube-api-burst parameters. To ensure sufficient CPU and memory, set the CPU and memory requests for VPA component pods by using the standard cpu and memory resource requests. To configure the VPA Operator to monitor only workloads that are being managed by a VPA CR, set the memory-saver parameter to true for the recommender component. For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors. Important These recommended values were derived from internal Red Hat testing on clusters that are not necessarily representative of real-world clusters. You should test these values in a non-production cluster before configuring a production cluster. Table 2.2. Requests by containers in the cluster Component 1-500 containers 500-1000 containers 1000-2000 containers 2000-4000 containers 4000+ containers CPU Memory CPU Memory CPU Memory CPU Memory CPU Memory Admission 25m 50Mi 25m 75Mi 40m 150Mi 75m 260Mi (0.03c)/2 + 10 [1] (0.1c)/2 + 50 [1] Recommender 25m 100Mi 50m 160Mi 75m 275Mi 120m 420Mi (0.05c)/2 + 50 [1] (0.15c)/2 + 120 [1] Updater 25m 100Mi 50m 220Mi 80m 350Mi 150m 500Mi (0.07c)/2 + 20 [1] (0.15c)/2 + 200 [1] c is the number of containers in the cluster. Note It is recommended that you set the memory limit on your containers to at least double the recommended requests in the table. However, because CPU is a compressible resource, setting CPU limits for containers can throttle the VPA. As such, it is recommended that you do not set a CPU limit on your containers. Table 2.3. Rate limits by VPAs in the cluster Component 1 - 150 VPAs 151 - 500 VPAs 501-2000 VPAs 2001-4000 VPAs QPS Limit [1] Burst [2] QPS Limit Burst QPS Limit Burst QPS Limit Burst Recommender 5 10 30 60 60 120 120 240 Updater 5 10 30 60 60 120 120 240 QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 5.0 . Burst specifies the burst limit when making requests to Kubernetes API server. 
The default for the updater and recommender pods is 10.0 . Note If you have more than 4000 VPAs in your cluster, it is recommended that you start performance tuning with the values in the table and slowly increase the values until you achieve the desired recommender and updater latency and performance. You should adjust these values slowly because increased QPS and Burst could affect the cluster health and slow down the Kubernetes API server if too many API requests are being sent to the API server from the VPA components. The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values: The container memory and CPU requests for all three VPA components The container memory limit for all three VPA components The QPS and burst rates for all three VPA components The memory-saver parameter to true for the VPA recommender component Example VerticalPodAutoscalerController CR apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specifies the tuning parameters for the VPA admission controller. 2 Specifies the API QPS and burst rates for the VPA admission controller. kube-api-qps : Specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is 5.0 . kube-api-burst : Specifies the burst limit when making requests to Kubernetes API server. The default is 10.0 . 3 Specifies the resource requests and limits for the VPA admission controller pod. 4 Specifies the tuning parameters for the VPA recommender. 5 Specifies that the VPA Operator monitors only workloads with a VPA CR. The default is false . 6 Specifies the tuning parameters for the VPA updater. You can verify that the settings were applied to each VPA component pod. Example updater pod apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 # ... resources: requests: cpu: 80m memory: 350M # ... Example admission controller pod apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 # ... resources: requests: cpu: 40m memory: 150Mi # ... Example recommender pod apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler # ... 
spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true # ... resources: requests: cpu: 75m memory: 275Mi # ... 2.5.3.7. Using an alternative recommender You can use your own recommender to autoscale based on your own algorithms. If you do not specify an alternative recommender, OpenShift Container Platform uses the default recommender, which suggests CPU and memory requests based on historical usage. Because there is no universal recommendation policy that applies to all types of workloads, you might want to create and deploy different recommenders for specific workloads. For example, the default recommender might not accurately predict future resource usage when containers exhibit certain resource behaviors, such as cyclical patterns that alternate between usage spikes and idling as used by monitoring applications, or recurring and repeating patterns used with deep learning applications. Using the default recommender with these usage behaviors might result in significant over-provisioning and Out of Memory (OOM) kills for your applications. Note Instructions for how to create a recommender are beyond the scope of this documentation, Procedure To use an alternative recommender for your pods: Create a service account for the alternative recommender and bind that service account to the required cluster role: apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> 1 Creates a service account for the recommender in the namespace where the recommender is deployed. 2 Binds the recommender service account to the metrics-reader role. Specify the namespace where the recommender is to be deployed. 3 Binds the recommender service account to the vpa-actor role. Specify the namespace where the recommender is to be deployed. 4 Binds the recommender service account to the vpa-target-reader role. Specify the namespace where the recommender is to be deployed. 
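Optional: Before deploying the recommender, you can confirm that the cluster roles referenced by these bindings exist on your cluster. This is a quick sanity check, not part of the documented procedure, and it assumes the role names shown in the example above:
USD oc get clusterrole system:metrics-reader system:vpa-actor system:vpa-target-reader
If any of the roles are missing, verify that the Vertical Pod Autoscaler Operator is installed before creating the bindings.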
To add the alternative recommender to the cluster, create a Deployment object similar to the following: apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true 1 Creates a container for your alternative recommender. 2 Specifies your recommender image. 3 Associates the service account that you created for the recommender. A new pod is created for the alternative recommender in the same namespace. USD oc get pods Example output NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s Configure a VPA CR that includes the name of the alternative recommender Deployment object. Example VPA CR to include the alternative recommender apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: "apps/v1" kind: Deployment 2 name: frontend 1 Specifies the name of the alternative recommender deployment. 2 Specifies the name of an existing workload object you want this VPA to manage. 2.5.4. Using the Vertical Pod Autoscaler Operator You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which pods it should analyze and determines the actions the VPA should take with those pods. You can use the VPA to scale built-in resources such as deployments or stateful sets, and custom resources that manage pods. For more information on using the VPA with custom resources, see "Using the Vertical Pod Autoscaler Operator with Custom Resources." Prerequisites The workload object that you want to autoscale must exist. If you want to use an alternative recommender, a deployment including that recommender must exist. Procedure To create a VPA CR for a specific workload object: Change to the project where the workload object you want to scale is located. Create a VPA CR YAML file: apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" recommenders: 5 - name: my-recommender 1 Specify the type of workload object you want this VPA to manage: Deployment , StatefulSet , Job , DaemonSet , ReplicaSet , or ReplicationController . 2 Specify the name of an existing workload object you want this VPA to manage. 3 Specify the VPA mode: auto to automatically apply the recommended resources on pods associated with the controller. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. recreate to automatically apply the recommended resources on pods associated with the workload object. 
The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. The recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. initial to automatically apply the recommended resources when pods associated with the workload object are created. The VPA does not update the pods as it learns new resource recommendations. off to only generate resource recommendations for the pods associated with the workload object. The VPA does not update the pods as it learns new resource recommendations and does not apply the recommendations to new pods.
4 Optional. Specify the containers you want to opt out and set the mode to Off .
5 Optional. Specify an alternative recommender.
Create the VPA CR:
USD oc create -f <file-name>.yaml
After a few moments, the VPA learns the resource usage of the containers in the pods associated with the workload object. You can view the VPA recommendations using the following command:
USD oc get vpa <vpa-name> --output yaml
The output shows the recommendations for CPU and memory requests, similar to the following:
Example output
... status: ... recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ...
1 lowerBound is the minimum recommended resource levels.
2 target is the recommended resource levels.
3 uncappedTarget is the most recent resource recommendations.
4 upperBound is the highest recommended resource levels.
2.5.4.1. Example custom resources for the Vertical Pod Autoscaler
The Vertical Pod Autoscaler Operator (VPA) can update not only built-in resources such as deployments or stateful sets, but also custom resources that manage pods. In order to use the VPA with a custom resource, when you create the CustomResourceDefinition (CRD) object, you must configure the labelSelectorPath field in the /scale subresource. The /scale subresource creates a Scale object. The labelSelectorPath field defines the JSON path inside the custom resource that corresponds to Status.Selector in the Scale object and in the custom resource.
The following is an example of a CustomResourceDefinition and a CustomResource that fulfill these requirements, along with a VerticalPodAutoscaler definition that targets the custom resource. The following example shows the /scale subresource contract.
Note This example does not result in the VPA scaling pods because there is no controller for the custom resource that allows it to own any pods. As such, you must write a controller in a language supported by Kubernetes to manage the reconciliation and state management between the custom resource and your pods. The example illustrates the configuration for the VPA to understand the custom resource as scalable.
Example custom CRD, CR apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod 1 Specifies the JSON path that corresponds to status.selector field of the custom resource object. Example custom CR apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: "app=scalable-cr" 1 replicas: 1 1 Specify the label type to apply to managed pods. This is the field referenced by the labelSelectorPath in the custom resource definition object. Example VPA object apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: "Auto" 2.5.5. Uninstalling the Vertical Pod Autoscaler Operator You can remove the Vertical Pod Autoscaler Operator (VPA) from your OpenShift Container Platform cluster. After uninstalling, the resource requests for the pods already modified by an existing VPA CR do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the Vertical Pod Autoscaler Operator. Note You can remove a specific VPA CR by using the oc delete vpa <vpa-name> command. The same actions apply for resource requests as uninstalling the vertical pod autoscaler. After removing the VPA Operator, it is recommended that you remove the other components associated with the Operator to avoid potential issues. Prerequisites The Vertical Pod Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-vertical-pod-autoscaler project. For the VerticalPodAutoscaler Operator, click the Options menu and select Uninstall Operator . Optional: To remove all operands associated with the Operator, in the dialog box, select Delete all operand instances for this operator checkbox. Click Uninstall . Optional: Use the OpenShift CLI to remove the VPA components: Delete the VPA namespace: USD oc delete namespace openshift-vertical-pod-autoscaler Delete the VPA custom resource definition (CRD) objects: USD oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io USD oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io USD oc delete crd verticalpodautoscalers.autoscaling.k8s.io Deleting the CRDs removes the associated roles, cluster roles, and role bindings. Note This action removes from the cluster all user-created VPA CRs. If you re-install the VPA, you must create these objects again. Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration vpa-webhook-config Delete the VPA Operator: USD oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler 2.6. 
Providing sensitive data to pods by using secrets Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text. 2.6.1. Understanding secrets The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. YAML Secret object definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary . 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod's service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume). 2.6.1.1. Types of secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/basic-auth : Use with Basic authentication kubernetes.io/dockercfg : Use as an image pull secret kubernetes.io/dockerconfigjson : Use as an image pull secret kubernetes.io/service-account-token : Use to obtain a legacy service account API token kubernetes.io/ssh-auth : Use with SSH key authentication kubernetes.io/tls : Use with TLS certificate authorities Specify type: Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. For examples of creating different types of secrets, see Understanding how to create secrets . 2.6.1.2. Secret data keys Secret keys must be in a DNS subdomain. 2.6.1.3. 
Automatically generated secrets
By default, OpenShift Container Platform creates the following secrets for each service account: A dockercfg image pull secret A service account token secret
Note Prior to OpenShift Container Platform 4.11, a second service account token secret was generated when a service account was created. This service account token secret was used to access the Kubernetes API. Starting with OpenShift Container Platform 4.11, this second service account token secret is no longer created. This is because the LegacyServiceAccountTokenNoAutoGeneration upstream Kubernetes feature gate was enabled, which stops the automatic generation of secret-based service account tokens to access the Kubernetes API. After upgrading to 4.15, any existing service account token secrets are not deleted and continue to function.
This service account token secret and docker configuration image pull secret are necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, these secrets are not generated for each service account.
Warning Do not rely on these automatically generated secrets for your own use; they might be removed in a future OpenShift Container Platform release.
Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. For more information, see Configuring bound service account tokens using volume projection .
You can also manually create a service account token secret to obtain a token, if the security exposure of a non-expiring token in a readable API object is acceptable to you. For more information, see Creating a service account token secret .
Additional resources For information about requesting bound service account tokens, see Using bound service account tokens . For information about creating a service account token secret, see Creating a service account token secret .
2.6.2. Understanding how to create secrets
As an administrator, you must create a secret before developers can create the pods that depend on that secret. When creating secrets: Create a secret object that contains the data you want to keep secret. The specific data required for each secret type is described in the following sections.
Example YAML object that creates an opaque secret
apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB
1 Specifies the type of secret.
2 Specifies base64-encoded keys and values.
3 Specifies plain text keys and values. Use either the data or stringData fields, not both.
Update the pod's service account to reference the secret:
YAML of a service account that uses a secret
apiVersion: v1 kind: ServiceAccount ...
secrets: - name: test-secret Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume): YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never 1 Add a volumeMounts field to each container that needs the secret. 2 Specifies an unused directory name where you would like the secret to appear. Each key in the secret data map becomes the filename under mountPath . 3 Set to true . If true, this instructs the driver to provide a read-only volume. 4 Specifies the name of the secret. YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Specifies the environment variable that consumes the secret key. YAML of a build config populating environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest' 1 Specifies the environment variable that consumes the secret key. 2.6.2.1. Secret creation restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to a Secret object. Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that could exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.6.2.2. Creating an opaque secret As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values. Procedure Create a Secret object in a YAML file on a control plane node. 
For example:
apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>
1 Specifies an opaque secret.
Use the following command to create a Secret object:
USD oc create -f <filename>.yaml
To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section.
Additional resources For more information on using secrets in pods, see Understanding how to create secrets .
2.6.2.3. Creating a service account token secret
As an administrator, you can create a service account token secret, which allows you to distribute a service account token to applications that must authenticate to the API.
Note It is recommended to obtain bound service account tokens using the TokenRequest API instead of using service account token secrets. The tokens obtained from the TokenRequest API are more secure than the tokens stored in secrets, because they have a bounded lifetime and are not readable by other API clients. You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a non-expiring token in a readable API object is acceptable to you. See the Additional resources section that follows for information on creating bound service account tokens.
Procedure Create a Secret object in a YAML file on a control plane node:
Example secret object:
apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: "sa-name" 1 type: kubernetes.io/service-account-token 2
1 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first.
2 Specifies a service account token secret.
Use the following command to create the Secret object:
USD oc create -f <filename>.yaml
To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section.
Additional resources For more information on using secrets in pods, see Understanding how to create secrets . For information on requesting bound service account tokens, see Using bound service account tokens . For information on creating service accounts, see Understanding and creating service accounts .
2.6.2.4. Creating a basic authentication secret
As an administrator, you can create a basic authentication secret, which allows you to store the credentials needed for basic authentication. When using this secret type, the data parameter of the Secret object must contain the following keys encoded in the base64 format: username : the user name for authentication password : the password or token for authentication
Note You can use the stringData parameter to use clear text content.
Procedure Create a Secret object in a YAML file on a control plane node:
Example secret object
apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 stringData: 2 username: admin password: <password>
1 Specifies a basic authentication secret.
2 Specifies the basic authentication values to use in clear text.
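Alternatively, you can create an equivalent basic authentication secret directly from the command line instead of writing a YAML file. This is a hedged alternative, not part of the procedure above; the secret name and literal values are examples only:
USD oc create secret generic secret-basic-auth --type=kubernetes.io/basic-auth --from-literal=username=admin --from-literal=password=<password>
If you created the YAML file as shown in the example above, continue with the following step.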
Use the following command to create the Secret object:
USD oc create -f <filename>.yaml
To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section.
Additional resources For more information on using secrets in pods, see Understanding how to create secrets .
2.6.2.5. Creating an SSH authentication secret
As an administrator, you can create an SSH authentication secret, which allows you to store data used for SSH authentication. When using this secret type, the data parameter of the Secret object must contain the SSH credential to use.
Procedure Create a Secret object in a YAML file on a control plane node:
Example secret object:
apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y ...
1 Specifies an SSH authentication secret.
2 Specifies the SSH key/value pair as the SSH credentials to use.
Use the following command to create the Secret object:
USD oc create -f <filename>.yaml
To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section.
Additional resources Understanding how to create secrets .
2.6.2.6. Creating a Docker configuration secret
As an administrator, you can create a Docker configuration secret, which allows you to store the credentials for accessing a container image registry. kubernetes.io/dockercfg . Use this secret type to store your local Docker configuration file. The data parameter of the secret object must contain the contents of a .dockercfg file encoded in the base64 format. kubernetes.io/dockerconfigjson . Use this secret type to store your local Docker configuration JSON file. The data parameter of the secret object must contain the contents of a .docker/config.json file encoded in the base64 format.
Procedure Create a Secret object in a YAML file on a control plane node.
Example Docker configuration secret object
apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockercfg 1 data: .dockercfg: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2
1 Specifies that the secret uses a Docker configuration file.
2 The output of a base64-encoded Docker configuration file.
Example Docker configuration JSON secret object
apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2
1 Specifies that the secret uses a Docker configuration JSON file.
2 The output of a base64-encoded Docker configuration JSON file.
Use the following command to create the Secret object:
USD oc create -f <filename>.yaml
To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section.
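As an alternative to writing the YAML file, you can have the CLI generate a kubernetes.io/dockerconfigjson secret from registry credentials. This is a sketch of the generic oc create secret docker-registry form; the secret name, registry server, and credential values are placeholders for your own environment:
USD oc create secret docker-registry secret-docker-json --docker-server=<registry_server> --docker-username=<username> --docker-password=<password> --docker-email=<email>
The command builds the .dockerconfigjson payload for you, so you do not need to base64 encode a configuration file manually.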
Additional resources For more information on using secrets in pods, see Understanding how to create secrets . 2.6.2.7. Creating a secret using the web console You can create secrets using the web console. Procedure Navigate to Workloads Secrets . Click Create From YAML . Edit the YAML manually to your specifications, or drag and drop a file into the YAML editor. For example: apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com 1 This example specifies an opaque secret; however, you may see other secret types such as service account token secret, basic authentication secret, SSH authentication secret, or a secret that uses Docker configuration. 2 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. Click Create . Click Add Secret to workload . From the drop-down menu, select the workload to add. Click Save . 2.6.3. Understanding how to update secrets When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec). Updating a secret follows the same workflow as deploying a new Container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods will report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.6.4. Creating and using secrets As an administrator, you can create a service account token secret. This allows you to distribute a service account token to applications that must authenticate to the API. Procedure Create a service account in your namespace by running the following command: USD oc create sa <service_account_name> -n <your_namespace> Save the following YAML example to a file named service-account-token-secret.yaml . The example includes a Secret object configuration that you can use to generate a service account token: apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: "sa-name" 2 type: kubernetes.io/service-account-token 3 1 Replace <secret_name> with the name of your service token secret. 2 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 3 Specifies a service account token secret type. 
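If a short-lived token is sufficient and you can use the TokenRequest API as recommended earlier, you can request a bound token directly instead of creating a secret. This is an optional alternative and assumes the service account created in the first step:
USD oc create token <service_account_name> -n <your_namespace>
To continue with a non-expiring, secret-based token, apply the file as described in the next step.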
Generate the service account token by applying the file: USD oc apply -f service-account-token-secret.yaml Get the service account token from the secret by running the following command: USD oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1 Example output ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA 1 Replace <sa_token_secret> with the name of your service token secret. Use your service account token to authenticate with the API of your cluster: USD curl -X GET <openshift_cluster_api> --header "Authorization: Bearer <token>" 1 2 1 Replace <openshift_cluster_api> with the OpenShift cluster API. 2 Replace <token> with the service account token that is output in the preceding command. 2.6.5. About using signed certificates with secrets To secure communication to your service, you can configure OpenShift Container Platform to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Service Pod spec configured for a service serving certificates secret. apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1 # ... 1 Specify the name for the certificate Other pods can trust cluster-created certificates (which are only signed for internal DNS names), by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 2.6.5.1. Generating signed certificates for use with secrets To use a signed serving certificate/key pair with a pod, create or edit the service to add the service.beta.openshift.io/serving-cert-secret-name annotation, then add the secret to the pod. Procedure To create a service serving certificate secret : Edit the Pod spec for your service. Add the service.beta.openshift.io/serving-cert-secret-name annotation with the name you want to use for your secret. kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. 
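If the service already exists, you can add the annotation from the command line instead of editing the service definition in a file. This is an optional shortcut that uses the annotation name shown above; the service and secret names are examples only:
USD oc annotate service my-service service.beta.openshift.io/serving-cert-secret-name=my-cert
If you are defining a new service in a YAML file as shown in the example, continue with the following steps.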
Create the service: USD oc create -f <file-name>.yaml View the secret to make sure it was created: View a list of all secrets: USD oc get secrets Example output NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m View details on your secret: USD oc describe secret my-cert Example output Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes Edit your Pod spec with that secret. apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: "/etc/my-path" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511 When it is available, your pod will run. The certificate will be good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 2.6.6. Troubleshooting secrets If a service certificate generation fails with (service's service.beta.openshift.io/serving-cert-generation-error annotation contains): secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificates regeneration by removing the old secret, and clearing the following annotations on the service service.beta.openshift.io/serving-cert-generation-error , service.beta.openshift.io/serving-cert-generation-error-num : Delete the secret: USD oc delete secret <secret_name> Clear the annotations: USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing annotation has a - after the annotation name to be removed. 2.7. Providing sensitive data to pods by using an external secrets store Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an alternative to using Kubernetes Secret objects to provide sensitive information, you can use an external secrets store to store the sensitive information. You can use the Secrets Store CSI Driver Operator to integrate with an external secrets store and mount the secret content as a pod volume. Important The Secrets Store CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.7.1. About the Secrets Store CSI Driver Operator Kubernetes secrets are stored with Base64 encoding. etcd provides encryption at rest for these secrets, but when secrets are retrieved, they are decrypted and presented to the user. If role-based access control is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Additionally, anyone who is authorized to create a pod in a namespace can use that access to read any secret in that namespace. To store and manage your secrets securely, you can configure the OpenShift Container Platform Secrets Store Container Storage Interface (CSI) Driver Operator to mount secrets from an external secret management system, such as Azure Key Vault, by using a provider plugin. Applications can then use the secret, but the secret does not persist on the system after the application pod is destroyed. The Secrets Store CSI Driver Operator, secrets-store.csi.k8s.io , enables OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container's file system. Secrets store volumes are mounted in-line. 2.7.1.1. Secrets store providers The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault 2.7.1.2. Automatic rotation The Secrets Store CSI driver periodically rotates the content in the mounted volume with the content from the external secrets store. If a secret is updated in the external secrets store, the secret will be updated in the mounted volume. The Secrets Store CSI Driver Operator polls for updates every 2 minutes. If you enabled synchronization of mounted content as Kubernetes secrets, the Kubernetes secrets are also rotated. Applications consuming the secret data must watch for updates to the secrets. 2.7.2. Installing the Secrets Store CSI driver Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To install the Secrets Store CSI driver: Install the Secrets Store CSI Driver Operator: Log in to the web console. Click Operators OperatorHub . Locate the Secrets Store CSI Driver Operator by typing "Secrets Store CSI" in the filter box. Click the Secrets Store CSI Driver Operator button. On the Secrets Store CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the Secrets Store CSI Driver Operator is listed in the Installed Operators section of the web console. Create the ClusterCSIDriver instance for the driver ( secrets-store.csi.k8s.io ): Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . 
Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed Click Create . 2.7.3. Mounting secrets from an external secrets store to a CSI volume After installing the Secrets Store CSI Driver Operator, you can mount secrets from one of the following external secrets stores to a CSI volume: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault 2.7.3.1. Mounting secrets from AWS Secrets Manager You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. To mount secrets from AWS Secrets Manager, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have configured AWS Secrets Manager to store the required secrets. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Secrets Manager provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Secrets Manager provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - 
name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "secretsmanager:GetSecretValue" - "secretsmanager:DescribeSecret" effect: Allow resource: "arn:*:secretsmanager:*:*:secret:testSecret-??????" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testSecret" objectType: "secretsmanager" 1 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. 
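The objects field holds a YAML list, so a single secret provider class can reference more than one secret from AWS Secrets Manager, and each entry is mounted as its own file under the volume mount path. The following sketch of the parameters section is illustrative only; the second secret name is hypothetical:

parameters:
  objects: |
    - objectName: "testSecret"
      objectType: "secretsmanager"
    - objectName: "anotherSecret"      # hypothetical second secret stored in AWS Secrets Manager
      objectType: "secretsmanager"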
Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount: List the secrets in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret View a secret in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.2. Mounting secrets from AWS Systems Manager Parameter Store You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Systems Manager Parameter Store to a CSI volume in OpenShift Container Platform. To mount secrets from AWS Systems Manager Parameter Store, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have configured AWS Systems Manager Parameter Store to store the required secrets. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Systems Manager Parameter Store provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Systems Manager Parameter Store provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. 
Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "ssm:GetParameter" - "ssm:GetParameters" effect: Allow resource: "arn:*:ssm:*:*:parameter/testParameter*" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. 
Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testParameter" objectType: "ssmparameter" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Systems Manager Parameter Store in the pod volume mount: List the secrets in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testParameter View a secret in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.3. Mounting secrets from Azure Key Vault You can use the Secrets Store CSI Driver Operator to mount secrets from Azure Key Vault to a CSI volume in OpenShift Container Platform. To mount secrets from Azure Key Vault, your cluster must be installed on Microsoft Azure. Prerequisites Your cluster is installed on Azure. 
You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have configured Azure Key Vault to store the required secrets. You have installed the Azure CLI ( az ). You have access to the cluster as a user with the cluster-admin role. Procedure Install the Azure Key Vault provider: Create a YAML file with the following configuration for the provider resources: Important The Azure Key Vault provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream Azure documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example azure-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: "/provider" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: "/var/run/secrets-store-csi-providers" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-azure service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f azure-provider.yaml Create a service principal to access the key vault: Set the service principal client secret as an environment variable by running the following command: USD 
SERVICE_PRINCIPAL_CLIENT_SECRET="USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)" Set the service principal client ID as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_ID="USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)" Create a generic secret with the service principal client secret and ID by running the following command: USD oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET} Apply the secrets-store.csi.k8s.io/used=true label to allow the provider to find this nodePublishSecretRef secret: USD oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-azure.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: "false" useVMManagedIdentity: "false" userAssignedIdentityID: "" keyvaultName: "kvname" objects: | array: - | objectName: secret1 objectType: secret tenantId: "tid" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as azure . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-azure.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-azure-provider" 3 nodePublishSecretRef: name: secrets-store-creds 4 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. 4 Specify the name of the Kubernetes secret that contains the service principal credentials to access Azure Key Vault. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from Azure Key Vault in the pod volume mount: List the secrets in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output secret1 View a secret in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1 Example output my-secret-value 2.7.4. Enabling synchronization of mounted content as Kubernetes secrets You can enable synchronization to create Kubernetes secrets from the content on a mounted volume. An example where you might want to enable synchronization is to use an environment variable in your deployment to reference the Kubernetes secret. 
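For example, after synchronization is enabled, a container could reference the synchronized Kubernetes secret through an environment variable. This is a sketch only: the secret name tlssecret and key tls.crt match the secretObjects example later in this section, and the variable name is hypothetical. The synchronized secret exists only while at least one pod mounts the corresponding CSI volume, so the consuming pod must also mount that volume.

env:
  - name: TLS_CERT                 # hypothetical environment variable name
    valueFrom:
      secretKeyRef:
        name: tlssecret            # Kubernetes secret created by the secretObjects configuration
        key: tls.crt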
Warning Do not enable synchronization if you do not want to store your secrets on your OpenShift Container Platform cluster and in etcd. Enable this functionality only if you require it, such as when you want to use environment variables to refer to the secret. If you enable synchronization, the secrets from the mounted volume are synchronized as Kubernetes secrets after you start a pod that mounts the secrets. The synchronized Kubernetes secret is deleted when all pods that mounted the content are deleted. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have access to the cluster as a user with the cluster-admin role. Procedure Edit the SecretProviderClass resource by running the following command: USD oc edit secretproviderclass my-azure-provider 1 1 Replace my-azure-provider with the name of your secret provider class. Add the secretsObjects section with the configuration for the synchronized Kubernetes secrets: apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: "test" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: "false" keyvaultName: "kvname" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: "tid" 1 Specify the configuration for synchronized Kubernetes secrets. 2 Specify the name of the Kubernetes Secret object to create. 3 Specify the type of Kubernetes Secret object to create. For example, Opaque or kubernetes.io/tls . 4 Specify the object name or alias of the mounted content to synchronize. 5 Specify the data field from the specified objectName to populate the Kubernetes secret with. Save the file to apply the changes. 2.7.5. Viewing the status of secrets in the pod volume mount You can view detailed information, including the versions, of the secrets in the pod volume mount. The Secrets Store CSI Driver Operator creates a SecretProviderClassPodStatus resource in the same namespace as the pod. You can review this resource to see detailed information, including versions, about the secrets in the pod volume mount. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have deployed a pod that mounts a volume from the Secrets Store CSI Driver Operator. You have access to the cluster as a user with the cluster-admin role. Procedure View detailed information about the secrets in a pod volume mount by running the following command: USD oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1 1 The name of the secret provider class pod status object is in the format of <pod_name>-<namespace>-<secret_provider_class_name> . Example output ... status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount 2.7.6. Uninstalling the Secrets Store CSI Driver Operator Prerequisites Access to the OpenShift Container Platform web console. 
Administrator access to the cluster. Procedure To uninstall the Secrets Store CSI Driver Operator: Stop all application pods that use the secrets-store.csi.k8s.io provider. Remove any third-party provider plug-in for your chosen secret store. Remove the Container Storage Interface (CSI) driver and associated manifests: Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for secrets-store.csi.k8s.io , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Verify that the CSI driver pods are no longer running. Uninstall the Secrets Store CSI Driver Operator: Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Operators Installed Operators . On the Installed Operators page, scroll or type "Secrets Store CSI" into the Search by name box to find the Operator, and then click it. On the upper, right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the Secrets Store CSI Driver Operator is no longer listed in the Installed Operators section of the web console. 2.8. Creating and using config maps The following sections define config maps and how to create and use them. 2.8.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. 
Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 2.8.2. Creating a config map in the OpenShift Container Platform web console You can create a config map in the OpenShift Container Platform web console. Procedure To create a config map as a cluster administrator: In the Administrator perspective, select Workloads Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . To create a config map as a developer: In the Developer perspective, select Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . 2.8.3. Creating a config map by using the CLI You can use the following command to create a config map from directories, specific files, or literal values. Procedure Create a config map: USD oc create configmap <configmap_name> [options] 2.8.3.1. Creating a config map from a directory You can create a config map from a directory by using the --from-file flag. This method allows you to use multiple files within a directory to create a config map. Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file. For example, the following command creates a config map with the contents of the example-files directory: USD oc create configmap game-config --from-file=example-files/ View the keys in the config map: USD oc describe configmaps game-config Example output Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of oc describe only shows the names of the keys and their sizes. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map holding the content of each file in this directory by entering the following command: USD oc create configmap game-config \ --from-file=example-files/ Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps game-config -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: "407" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985 2.8.3.2. Creating a config map from a file You can create a config map from a file by using the --from-file flag. You can pass the --from-file option multiple times to the CLI. You can also specify the key to set in a config map for content imported from a file by passing a key=value expression to the --from-file option. For example: USD oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties Note If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. OpenShift Container Platform detects binary files and transparently encodes the file as MIME . On the server, the MIME payload is decoded and stored without corrupting the data. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map by specifying a specific file: USD oc create configmap game-config-2 \ --from-file=example-files/game.properties \ --from-file=example-files/ui.properties Create a config map by specifying a key-value pair: USD oc create configmap game-config-3 \ --from-file=game-special-key=example-files/game.properties Verification Enter the oc get command for the object with the -o option to see the values of the keys from the file: USD oc get configmaps game-config-2 -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: "516" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985 Enter the oc get command for the object with the -o option to see the values of the keys from the key-value pair: USD oc get configmaps game-config-3 -o yaml Example output apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985 1 This is the key that you set in the preceding step. 2.8.3.3. Creating a config map from literal values You can supply literal values for a config map. The --from-literal option takes a key=value syntax, which allows literal values to be supplied directly on the command line. Procedure Create a config map by specifying a literal value: USD oc create configmap special-config \ --from-literal=special.how=very \ --from-literal=special.type=charm Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps special-config -o yaml Example output apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985 2.8.4. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 2.8.4.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 2.8.4.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. 
When this pod is run, the output from the echo command run in the test-container container is as follows: 2.8.4.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: 2.9. Using device plugins to access external resources with pods Device plugins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code. 2.9.1. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. 
Any device plugin must support following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM(R) Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 2.9.1.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager. Since device plugins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 2.9.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. 
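For reference, a pod requests an advertised device through its resource limits in the same way as any other extended resource. The following is a minimal sketch; the resource name example.com/device and the image are placeholders, not names provided by any particular device plugin:

apiVersion: v1
kind: Pod
metadata:
  name: device-consumer            # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
    resources:
      limits:
        example.com/device: 1      # hypothetical extended resource advertised by a device plugin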
Device Manager checks in its database to verify if a corresponding plugin exists or not. If the plugin exists and there are free allocatable devices as well as per local cache, Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 2.9.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command. Perform one of the following steps: View the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to 'true`. Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 2.10. Including pod priority in pod scheduling decisions You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node. To use priority and preemption, you create priority classes that define the relative weight of your pods. Then, reference a priority class in the pod specification to apply that weight for scheduling. 2.10.1. Understanding pod priority When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, scheduler continues to schedule other lower priority pods. 2.10.1.1. Pod priority classes You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority. 
A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. By default, OpenShift Container Platform has two reserved priority classes for critical system pods to have guaranteed scheduling. USD oc get priorityclasses Example output NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are sdn-ovs , sdn , and so forth. A number of critical components include the system-node-critical priority class by default, for example: master-api master-controller master-etcd sdn sdn-ovs sync system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the system-node-critical priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the system-cluster-critical priority class by default, for example: fluentd metrics-server descheduler openshift-user-critical - You can use the priorityClassName field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the openshift-monitoring and openshift-user-workload-monitoring namespaces use the openshift-user-critical priorityClassName . Monitoring workloads use system-critical as their first priorityClass , but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating. cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes over other apps. 2.10.1.2. Pod priority names After you have one or more priority classes, you can create pods that specify a priority class name in a Pod spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected. 2.10.2. Understanding pod preemption When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod. When the scheduler preempts one or more pods on a node, the nominatedNodeName field of higher-priority Pod spec is set to the name of the node, along with the nodename field. The scheduler uses the nominatedNodeName field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the clusters. 
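You can inspect these fields on a pending pod to see whether preemption was triggered and where the pod was eventually bound. This is a sketch that assumes the standard Kubernetes Pod API paths ( status.nominatedNodeName and spec.nodeName ) and uses a placeholder pod name:

USD oc get pod <pod_name> -o jsonpath='{.status.nominatedNodeName}{" "}{.spec.nodeName}{"\n"}'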
After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. As a result, the nominatedNodeName field and nodeName field of the Pod spec might be different. Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher-priority pod than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the nominatedNodeName of the pending pod, making the pod eligible for another node. Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing a portion of the lower-priority pods. The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node. 2.10.2.1. Non-preempting priority classes Pods with the preemption policy set to Never are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them. Non-preempting pods can still be preempted by other, high-priority pods. 2.10.2.2. Pod preemption and other scheduler settings If you enable pod priority and preemption, consider your other scheduler settings: Pod priority and pod disruption budget A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, OpenShift Container Platform respects them when preempting pods at a best effort level. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements. Pod priority and pod affinity Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label. If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node and pending pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. 2.10.2.3. Graceful termination of preempted pods When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node. To minimize this gap, configure a small graceful termination period for lower-priority pods. 2.10.3. 
Configuring priority and preemption You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the priorityClassName in your pod specs. Note You cannot add a priority class directly to an existing scheduled pod. Procedure To configure your cluster to use priority and preemption: Create one or more priority classes: Create a YAML file similar to the following: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: "This priority class should be used for XYZ service pods only." 5 1 The name of the priority class object. 2 The priority value of the object. 3 Optional. Specifies whether this priority class is preempting or non-preempting. The preemption policy defaults to PreemptLowerPriority , which allows pods of that priority class to preempt lower-priority pods. If the preemption policy is set to Never , pods in that priority class are non-preempting. 4 Optional. Specifies whether this priority class should be used for pods without a priority class name specified. This field is false by default. Only one priority class with globalDefault set to true can exist in the cluster. If there is no priority class with globalDefault:true , the priority of pods with no priority class name is zero. Adding a priority class with globalDefault:true affects only pods created after the priority class is added and does not change the priorities of existing pods. 5 Optional. Describes which pods developers should use with this priority class. Enter an arbitrary text string. Create the priority class: USD oc create -f <file-name>.yaml Create a pod spec to include the name of a priority class: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1 1 Specify the priority class to use with this pod. Create the pod: USD oc create -f <file-name>.yaml You can add the priority name directly to the pod configuration or to a pod template. 2.11. Placing pods on specific nodes using node selectors A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below. 2.11.1. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. 
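For example, one way to add a node selector to the controlling object from the CLI is to patch its pod template. The following command is a sketch that assumes a Deployment named hello-node and the example labels used later in this section ( type=user-node and region=east ); the procedure that follows shows the equivalent YAML edits:

$ oc patch deployment/hello-node --type=merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"type":"user-node","region":"east"}}}}}'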
Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.28.5 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... 
template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 2.12. Run Once Duration Override Operator 2.12.1. Run Once Duration Override Operator overview You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. 2.12.1.1. About the Run Once Duration Override Operator OpenShift Container Platform relies on run-once pods to perform tasks such as deploying a pod or performing a build. Run-once pods are pods that have a RestartPolicy of Never or OnFailure . Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that those run-once pods can be active. After the time limit expires, the cluster will try to actively terminate those pods. The main reason to have such a limit is to prevent tasks such as builds to run for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.12.2. Run Once Duration Override Operator release notes Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that run-once pods can be active. After the time limit expires, the cluster tries to terminate the run-once pods. The main reason to have such a limit is to prevent tasks such as builds to run for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. These release notes track the development of the Run Once Duration Override Operator for OpenShift Container Platform. For an overview of the Run Once Duration Override Operator, see About the Run Once Duration Override Operator . 2.12.2.1. Run Once Duration Override Operator 1.1.2 Issued: 31 October 2024 The following advisory is available for the Run Once Duration Override Operator 1.1.2: RHSA-2024:8337 2.12.2.1.1. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.12.2.2. Run Once Duration Override Operator 1.1.1 Issued: 1 July 2024 The following advisory is available for the Run Once Duration Override Operator 1.1.1: RHSA-2024:1616 2.12.2.2.1. New features and enhancements You can install and use the Run Once Duration Override Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When you run Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 2.12.2.2.2. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.12.2.3. Run Once Duration Override Operator 1.1.0 Issued: 28 February 2024 The following advisory is available for the Run Once Duration Override Operator 1.1.0: RHSA-2024:0269 2.12.2.3.1. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.12.3. Overriding the active deadline for run-once pods You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. By enabling the run-once duration override on a namespace, all future run-once pods created or updated in that namespace have their activeDeadlineSeconds field set to the value specified by the Run Once Duration Override Operator. Note If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.12.3.1. Installing the Run Once Duration Override Operator You can use the web console to install the Run Once Duration Override Operator. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Run Once Duration Override Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-run-once-duration-override-operator in the Name field and click Create . Install the Run Once Duration Override Operator. Navigate to Operators OperatorHub . Enter Run Once Duration Override Operator into the filter box. Select the Run Once Duration Override Operator and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Run Once Duration Override Operator. Select A specific namespace on the cluster . Choose openshift-run-once-duration-override-operator from the dropdown menu under Installed namespace . Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Create a RunOnceDurationOverride instance. From the Operators Installed Operators page, click Run Once Duration Override Operator . Select the Run Once Duration Override tab and click Create RunOnceDurationOverride . Edit the settings as necessary. Under the runOnceDurationOverride section, you can update the spec.activeDeadlineSeconds value, if required. The predefined value is 3600 seconds, or 1 hour. Click Create . Verification Log in to the OpenShift CLI. Verify all pods are created and running properly. 
USD oc get pods -n openshift-run-once-duration-override-operator Example output NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s 2.12.3.2. Enabling the run-once duration override on a namespace To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. Prerequisites The Run Once Duration Override Operator is installed. Procedure Log in to the OpenShift CLI. Add the label to enable the run-once duration override to your namespace: USD oc label namespace <namespace> \ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true 1 Specify the namespace to enable the run-once duration override on. After you enable the run-once duration override on this namespace, future run-once pods that are created in this namespace will have their activeDeadlineSeconds field set to the override value from the Run Once Duration Override Operator. Existing pods in this namespace will also have their activeDeadlineSeconds value set when they are updated . Verification Create a test run-once pod in the namespace that you enabled the run-once duration override on: apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done 1 Replace <namespace> with the name of your namespace. 2 The restartPolicy must be Never or OnFailure to be a run-once pod. Verify that the pod has its activeDeadlineSeconds field set: USD oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds Example output activeDeadlineSeconds: 3600 2.12.3.3. Updating the run-once active deadline override value You can customize the override value that the Run Once Duration Override Operator applies to run-once pods. The predefined value is 3600 seconds, or 1 hour. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift CLI. Edit the RunOnceDurationOverride resource: USD oc edit runoncedurationoverride cluster Update the activeDeadlineSeconds field: apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: # ... spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1 # ... 1 Set the activeDeadlineSeconds field to the desired value, in seconds. Save the file to apply the changes. Any future run-once pods created in namespaces where the run-once duration override is enabled will have their activeDeadlineSeconds field set to this new value. Existing run-once pods in these namespaces will receive this new value when they are updated. 2.12.4. Uninstalling the Run Once Duration Override Operator You can remove the Run Once Duration Override Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 2.12.4.1. Uninstalling the Run Once Duration Override Operator You can use the web console to uninstall the Run Once Duration Override Operator. 
Uninstalling the Run Once Duration Override Operator does not unset the activeDeadlineSeconds field for run-once pods, but it will no longer apply the override value to future run-once pods. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select openshift-run-once-duration-override-operator from the Project dropdown list. Delete the RunOnceDurationOverride instance. Click Run Once Duration Override Operator and select the Run Once Duration Override tab. Click the Options menu next to the cluster entry and select Delete RunOnceDurationOverride . In the confirmation dialog, click Delete . Uninstall the Run Once Duration Override Operator. Navigate to Operators Installed Operators . Click the Options menu next to the Run Once Duration Override Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 2.12.4.2. Uninstalling Run Once Duration Override Operator resources Optionally, after uninstalling the Run Once Duration Override Operator, you can remove its related resources from your cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have uninstalled the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were created when the Run Once Duration Override Operator was installed: Navigate to Administration CustomResourceDefinitions . Enter RunOnceDurationOverride in the Name field to filter the CRDs. Click the Options menu next to the RunOnceDurationOverride CRD and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . Delete the openshift-run-once-duration-override-operator namespace. Navigate to Administration Namespaces . Enter openshift-run-once-duration-override-operator into the filter box. Click the Options menu next to the openshift-run-once-duration-override-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-run-once-duration-override-operator and click Delete . Remove the run-once duration override label from the namespaces that it was enabled on. Navigate to Administration Namespaces . Select your namespace. Click Edit next to the Labels field. Remove the runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true label and click Save . | [
"kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1",
"oc create -f pod-disruption-budget.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"horizontalpodautoscaler.autoscaling/hello-node autoscaled",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0",
"oc get deployment hello-node",
"NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config",
"type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60",
"behavior: scaleDown: stabilizationWindowSeconds: 300",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled",
"oc edit hpa hpa-resource-metrics-memory",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11",
"oc create -f <file-name>.yaml",
"oc get hpa cpu-autoscale",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler",
"Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max",
"oc create -f <file-name>.yaml",
"oc create -f hpa.yaml",
"horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created",
"oc get hpa hpa-resource-metrics-memory",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m",
"oc describe hpa hpa-resource-metrics-memory",
"Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group \"apps\"",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc describe hpa <pod-name>",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc get all -n openshift-vertical-pod-autoscaler",
"NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s",
"resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi",
"resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3",
"oc get vpa <vpa-name> --output yaml",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"",
"spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M",
"apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi",
"apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi",
"apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>",
"apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true",
"oc get pods",
"NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender",
"oc create -f <file-name>.yaml",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod",
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"",
"oc delete namespace openshift-vertical-pod-autoscaler",
"oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io",
"oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io",
"oc delete crd verticalpodautoscalers.autoscaling.k8s.io",
"oc delete MutatingWebhookConfiguration vpa-webhook-config",
"oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testSecret",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testParameter",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers",
"oc apply -f azure-provider.yaml",
"SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"",
"SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"",
"oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}",
"oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"",
"oc create -f secret-provider-class-azure.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"secret1",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1",
"my-secret-value",
"oc edit secretproviderclass my-azure-provider 1",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"",
"oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1",
"status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.28.5",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc get pods -n openshift-run-once-duration-override-operator",
"NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s",
"oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true",
"apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done",
"oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds",
"activeDeadlineSeconds: 3600",
"oc edit runoncedurationoverride cluster",
"apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/nodes/working-with-pods |
Chapter 5. Using feature gates in a hosted cluster | Chapter 5. Using feature gates in a hosted cluster You can use feature gates in a hosted cluster to enable features that are not part of the default set of features. You can enable the TechPreviewNoUpgrade feature set by using feature gates in your hosted cluster. 5.1. Enabling feature sets by using feature gates You can enable the TechPreviewNoUpgrade feature set in a hosted cluster by editing the HostedCluster custom resource (CR) with the OpenShift CLI. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Open the HostedCluster CR for editing on the hosting cluster by running the following command: $ oc edit <hosted_cluster_name> -n <hosted_cluster_namespace> Define the feature set by entering a value in the featureSet field. For example: apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: featureGate: featureSet: TechPreviewNoUpgrade 3 1 Specifies your hosted cluster name. 2 Specifies your hosted cluster namespace. 3 This feature set is a subset of the current Technology Preview features. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. Save the file to apply the changes. Verification Verify that the TechPreviewNoUpgrade feature gate is enabled in your hosted cluster by running the following command: $ oc get featuregate cluster -o yaml Additional resources FeatureGate [config.openshift.io/v1 ]
"oc edit <hosted_cluster_name> -n <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: featureGate: featureSet: TechPreviewNoUpgrade 3",
"oc get featuregate cluster -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hosted_control_planes/hcp-using-feature-gates |
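The spec.configuration.featureGate.featureSet field shown above can also be read back directly with jsonpath, which is a quicker check than scanning the full YAML. The following is a minimal sketch: the placeholders are the same ones used in the procedure, the first command runs against the hosting cluster, and the second assumes you are logged in to the hosted cluster itself.

# Read the feature set recorded on the HostedCluster resource (hosting cluster)
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.spec.configuration.featureGate.featureSet}'

# Read the feature set reported by the hosted cluster's FeatureGate resource (hosted cluster)
oc get featuregate cluster -o jsonpath='{.spec.featureSet}'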
5.4. Network Tuning Techniques | 5.4. Network Tuning Techniques This section describes techniques for tuning network performance in virtualized environments. Important The following features are supported on Red Hat Enterprise Linux 7 hypervisors and virtual machines, but also on virtual machines running Red Hat Enterprise Linux 6.6 and later. 5.4.1. Bridge Zero Copy Transmit Zero copy transmit mode is effective on large packet sizes. It typically reduces the host CPU overhead by up to 15% when transmitting large packets between a guest network and an external network, without affecting throughput. It does not affect performance for guest-to-guest, guest-to-host, or small packet workloads. Bridge zero copy transmit is fully supported on Red Hat Enterprise Linux 7 virtual machines, but disabled by default. To enable zero copy transmit mode, set the experimental_zcopytx kernel module parameter for the vhost_net module to 1. For detailed instructions, see the Virtualization Deployment and Administration Guide . Note An additional data copy is normally created during transmit as a threat mitigation technique against denial of service and information leak attacks. Enabling zero copy transmit disables this threat mitigation technique. If performance regression is observed, or if host CPU utilization is not a concern, zero copy transmit mode can be disabled by setting experimental_zcopytx to 0. 5.4.2. Multi-Queue virtio-net Multi-queue virtio-net provides an approach that scales the network performance as the number of vCPUs increases, by allowing them to transfer packets through more than one virtqueue pair at a time. Today's high-end servers have more processors, and guests running on them often have an increasing number of vCPUs. In single queue virtio-net, the scale of the protocol stack in a guest is restricted, as the network performance does not scale as the number of vCPUs increases. Guests cannot transmit or retrieve packets in parallel, as virtio-net has only one TX and RX queue. Multi-queue support removes these bottlenecks by allowing paralleled packet processing. Multi-queue virtio-net provides the greatest performance benefit when: Traffic packets are relatively large. The guest is active on many connections at the same time, with traffic running between guests, guest to host, or guest to an external system. The number of queues is equal to the number of vCPUs. This is because multi-queue support optimizes RX interrupt affinity and TX queue selection in order to make a specific queue private to a specific vCPU. Note Currently, setting up a multi-queue virtio-net connection can have a negative effect on the performance of outgoing traffic. Specifically, this may occur when sending packets under 1,500 bytes over the Transmission Control Protocol (TCP) stream. For more information, see the Red Hat Knowledgebase . 5.4.2.1. Configuring Multi-Queue virtio-net To use multi-queue virtio-net, enable support in the guest by adding the following to the guest XML configuration (where the value of N is from 1 to 256, as the kernel supports up to 256 queues for a multi-queue tap device): When running a virtual machine with N virtio-net queues in the guest, enable the multi-queue support with the following command (where the value of M is from 1 to N ): | [
"<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>",
"ethtool -L eth0 combined M"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques |
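Because the section above only names the experimental_zcopytx parameter, the following sketch shows one way to apply and verify it. The file name vhost-net.conf is an arbitrary choice, eth0 is the guest interface used in the example above, and the sysfs path is only present while the vhost_net module is loaded.

# Persist zero copy transmit across host reboots
echo "options vhost_net experimental_zcopytx=1" > /etc/modprobe.d/vhost-net.conf

# Verify the current value after the module is reloaded
cat /sys/module/vhost_net/parameters/experimental_zcopytx

# Inside the guest, confirm how many queues the multi-queue interface currently uses
ethtool -l eth0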
Chapter 14. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] | Chapter 14. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object Required spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation 14.1.1. .spec Description OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation Type object Required podref Property Type Description containerid string ifname string podref string 14.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations GET : list objects of kind OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations DELETE : delete collection of OverlappingRangeIPReservation GET : list objects of kind OverlappingRangeIPReservation POST : create an OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} DELETE : delete an OverlappingRangeIPReservation GET : read the specified OverlappingRangeIPReservation PATCH : partially update the specified OverlappingRangeIPReservation PUT : replace the specified OverlappingRangeIPReservation 14.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations Table 14.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 14.2. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty 14.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations Table 14.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 14.4. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OverlappingRangeIPReservation Table 14.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 14.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.8. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty HTTP method POST Description create an OverlappingRangeIPReservation Table 14.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.10. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 14.11. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 202 - Accepted OverlappingRangeIPReservation schema 401 - Unauthorized Empty 14.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} Table 14.12. Global path parameters Parameter Type Description name string name of the OverlappingRangeIPReservation namespace string object name and auth scope, such as for teams and projects Table 14.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OverlappingRangeIPReservation Table 14.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 14.15. Body parameters Parameter Type Description body DeleteOptions schema Table 14.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OverlappingRangeIPReservation Table 14.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.18. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OverlappingRangeIPReservation Table 14.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.20. Body parameters Parameter Type Description body Patch schema Table 14.21. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OverlappingRangeIPReservation Table 14.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.23. 
Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 14.24. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/overlappingrangeipreservation-whereabouts-cni-cncf-io-v1alpha1 |
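For orientation, a reservation object that follows the spec described above looks similar to the following sketch. Whereabouts creates and manages these objects itself, so the metadata.name (conventionally derived from the reserved IP address) and the namespace shown here are assumptions rather than values taken from a live cluster.

apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: OverlappingRangeIPReservation
metadata:
  name: 192.168.2.5            # assumed name; normally set by Whereabouts
  namespace: openshift-multus  # assumed namespace
spec:
  podref: my-namespace/example-pod
  ifname: net1
  containerid: <container_id>

Existing reservations can be listed across all namespaces with oc get overlappingrangeipreservations -A.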
Chapter 4. Tuna | Chapter 4. Tuna You can use the Tuna tool to adjust scheduler tunables, tune thread priority, IRQ handlers, and isolate CPU cores and sockets. Tuna aims to reduce the complexity of performing tuning tasks. After installing the tuna package, use the tuna command without any arguments to start the Tuna graphical user interface (GUI). Use the tuna -h command to display available command-line interface (CLI) options. Note that the tuna (8) manual page distinguishes between action and modifier options. The Tuna GUI and CLI provide equivalent functionality. The GUI displays the CPU topology on one screen to help you identify problems. The Tuna GUI also allows you to make changes to the running threads, and see the results of those changes immediately. In the CLI, Tuna accepts multiple command-line parameters and processes them sequentially. You can use such commands in application initialization scripts as configuration commands. The Monitoring tab of the Tuna GUI Important Use the tuna --save= filename command with a descriptive file name to save the current configuration. Note that this command does not save every option that Tuna can change, but saves the kernel thread changes only. Any processes that are not currently running when they are changed are not saved. 4.1. Reviewing the System with Tuna Before you make any changes, you can use Tuna to show you what is currently happening on the system. To view the current policies and priorities, use the tuna --show_threads command: To show only a specific thread corresponding to a PID or matching a command name, add the --threads option before --show_threads : The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns. To view the current interrupt requests (IRQs) and their affinity, use the tuna --show_irqs command: To show only a specific interrupt request corresponding to an IRQ number or matching an IRQ user name, add the --irqs option before --show_irqs : The number_or_user_list argument is a list of comma-separated IRQ numbers or user-name patterns. | [
"tuna --show_threads thread pid SCHED_ rtpri affinity cmd 1 OTHER 0 0,1 init 2 FIFO 99 0 migration/0 3 OTHER 0 0 ksoftirqd/0 4 FIFO 99 0 watchdog/0",
"tuna --threads= pid_or_cmd_list --show_threads",
"tuna --show_irqs users affinity 0 timer 0 1 i8042 0 7 parport0 0",
"tuna --irqs= number_or_user_list --show_irqs"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-tuna |
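Combining the filtering and saving options described above, a short inspection session might look like the following sketch. The sshd name and PID 1 are examples only, and timer is taken from the sample IRQ output shown earlier.

# Show only the sshd threads and PID 1
tuna --threads=sshd,1 --show_threads

# Show only the IRQ registered by the timer user
tuna --irqs=timer --show_irqs

# Save the current kernel thread tuning for later reuse
tuna --save=tuna-threads.conf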
Chapter 1. Using cost management currency exchange | Chapter 1. Using cost management currency exchange Cost management uses the United States Dollar (USD) by default. However, you can use the cost management currency exchange feature to estimate your costs in your local currency. This feature updates costs both in Red Hat Hybrid Cloud Console and in your exported cost report files. Cost management updates currency exchange information daily with the most recent data from ExchangeRate-API . The values in cost management do not reflect any foreign currency contract agreements. Procedure From Red Hat Hybrid Cloud Console go to cost management . From the currency dropdown, select your local currency. After you change your currency, cost management automatically updates all values with the most recent exchange rates. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/analyzing_your_cost_data/understanding-currency |
Chapter 2. Threat and Vulnerability Management | Chapter 2. Threat and Vulnerability Management Red Hat Ceph Storage is typically deployed in conjunction with cloud computing solutions, so it can be helpful to think about a Red Hat Ceph Storage deployment abstractly as one of many series of components in a larger deployment. These deployments typically have shared security concerns, which this guide refers to as Security Zones . Threat actors and vectors are classified based on their motivation and access to resources. The intention is to provide you with a sense of the security concerns for each zone, depending on your objectives. 2.1. Threat Actors A threat actor is an abstract way to refer to a class of adversary that you might attempt to defend against. The more capable the actor, the more rigorous the security controls that are required for successful attack mitigation and prevention. Security is a matter of balancing convenience, defense, and cost, based on requirements. In some cases, it's impossible to secure a Red Hat Ceph Storage deployment against all threat actors described here. When deploying Red Hat Ceph Storage, you must decide where the balance lies for your deployment and usage. As part of your risk assessment, you must also consider the type of data you store and any accessible resources, as this will also influence certain actors. However, even if your data is not appealing to threat actors, they could simply be attracted to your computing resources. Nation-State Actors: This is the most capable adversary. Nation-state actors can bring tremendous resources against a target. They have capabilities beyond that of any other actor. It's difficult to defend against these actors without stringent controls in place, both human and technical. Serious Organized Crime: This class describes highly capable and financially driven groups of attackers. They are able to fund in-house exploit development and target research. In recent years, the rise of organizations such as the Russian Business Network, a massive cyber-criminal enterprise, has demonstrated how cyber attacks have become a commodity. Industrial espionage falls within the serious organized crime group. Highly Capable Groups: This refers to 'Hacktivist' type organizations who are not typically commercially funded, but can pose a serious threat to service providers and cloud operators. Motivated Individuals Acting Alone: These attackers come in many guises, such as rogue or malicious employees, disaffected customers, or small-scale industrial espionage. Script Kiddies: These attackers don't target a specific organization, but run automated vulnerability scanning and exploitation. They are often a nuisance; however, compromise by one of these actors is a major risk to an organization's reputation. The following practices can help mitigate some of the risks identified above: Security Updates: You must consider the end-to-end security posture of your underlying physical infrastructure, including networking, storage, and server hardware. These systems will require their own security hardening practices. For your Red Hat Ceph Storage deployment, you should have a plan to regularly test and deploy security updates. Product Updates: Red Hat recommends running product updates as they become available. Updates are typically released every six weeks (and occasionally more frequently). 
Red Hat endeavors to make point releases and z-stream releases fully compatible within a major release in order to not require additional integration testing. Access Management: Access management includes authentication, authorization, and accounting. Authentication is the process of verifying the user's identity. Authorization is the process of granting permissions to an authenticated user. Accounting is the process of tracking which user performed an action. When granting system access to users, apply the principle of least privilege , and only grant users the granular system privileges they actually need. This approach can also help mitigate the risks of both malicious actors and typographical errors from system administrators. Manage Insiders: You can help mitigate the threat of malicious insiders by applying careful assignment of role-based access control (minimum required access), using encryption on internal interfaces, and using authentication/authorization security (such as centralized identity management). You can also consider additional non-technical options, such as separation of duties and irregular job role rotation. 2.2. Security Zones A security zone comprises users, applications, servers, or networks that share common trust requirements and expectations within a system. Typically they share the same authentication, authorization requirements, and users. Although you may refine these zone definitions further, this guide refers to four distinct security zones, three of which form the bare minimum that is required to deploy a security-hardened Red Hat Ceph Storage cluster. These security zones are listed below from least to most trusted: Public Security Zone: The public security zone is an entirely untrusted area of the cloud infrastructure. It can refer to the Internet as a whole or simply to networks that are external to your Red Hat OpenStack deployment over which you have no authority. Any data with confidentiality or integrity requirements that traverse this zone should be protected using compensating controls such as encryption. The public security zone SHOULD NOT be confused with the Ceph Storage Cluster's front- or client-side network, which is referred to as the public_network in RHCS and is usually NOT part of the public security zone or the Ceph client security zone. Ceph Client Security Zone: With RHCS, the Ceph client security zone refers to networks accessing Ceph clients such as Ceph Object Gateway, Ceph Block Device, Ceph Filesystem, or librados . The Ceph client security zone is typically behind a firewall separating itself from the public security zone. However, Ceph clients are not always protected from the public security zone. It is possible to expose the Ceph Object Gateway's S3 and Swift APIs in the public security zone. Storage Access Security Zone: The storage access security zone refers to internal networks providing Ceph clients with access to the Ceph Storage Cluster. We use the phrase 'storage access security zone' so that this document is consistent with the terminology used in the OpenStack Platform Security and Hardening Guide. The storage access security zone includes the Ceph Storage Cluster's front- or client-side network, which is referred to as the public_network in RHCS. Ceph Cluster Security Zone: The Ceph cluster security zone refers to the internal networks providing the Ceph Storage Cluster's OSD daemons with network communications for replication, heartbeating, backfilling, and recovery. 
The Ceph cluster security zone includes the Ceph Storage Cluster's backside network, which is referred to as the cluster_network in RHCS. These security zones can be mapped separately, or combined to represent the majority of the possible areas of trust within a given RHCS deployment. Security zones should be mapped out against your specific RHCS deployment topology. The zones and their trust requirements will vary depending upon whether Red Hat Ceph Storage is operating in a standalone capacity or is serving a public, private, or hybrid cloud. For a visual representation of these security zones, see Security Optimized Architecture . Additional Resources See the Network Communications section in the Red Hat Ceph Storage Data Security and Hardening Guide for more details. 2.3. Connecting Security Zones Any component that spans across multiple security zones with different trust levels or authentication requirements must be carefully configured. These connections are often the weak points in network architecture, and should always be configured to meet the security requirements of the highest trust level of any of the zones being connected. In many cases, the security controls of the connected zones should be a primary concern due to the likelihood of attack. The points where zones meet do present an opportunity for attackers to migrate or target their attack to more sensitive parts of the deployment. In some cases, Red Hat Ceph Storage administrators might want to consider securing integration points at a higher standard than any of the zones in which the integration point resides. For example, the Ceph Cluster Security Zone can be isolated from other security zones easily, because there is no reason for it to connect to other security zones. By contrast, the Storage Access Security Zone must provide access to port 6789 on Ceph monitor nodes, and ports 6800-7300 on Ceph OSD nodes. However, port 3000 should be exclusive to the Storage Access Security Zone, because it provides access to Ceph Grafana monitoring information that should be exposed to Ceph administrators only. A Ceph Object Gateway in the Ceph Client Security Zone will need to access the Ceph Cluster Security Zone's monitors (port 6789 ) and OSDs (ports 6800-7300 ), and may expose its S3 and Swift APIs to the Public Security Zone such as over HTTP port 80 or HTTPS port 443 ; yet, it may still need to restrict access to the admin API. The design of Red Hat Ceph Storage is such that the separation of security zones is difficult. As core services usually span at least two zones, special consideration must be given when applying security controls to them. 2.4. Security-Optimized Architecture A Red Hat Ceph Storage cluster's daemons typically run on nodes that are subnet isolated and behind a firewall, which makes it relatively simple to secure an RHCS cluster. By contrast, Red Hat Ceph Storage clients such as Ceph Block Device ( rbd ), Ceph Filesystem ( cephfs ), and Ceph Object Gateway ( rgw ) access the RHCS storage cluster, but expose their services to other cloud computing platforms. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/data_security_and_hardening_guide/assembly-threat-and-vulnerability-management |
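As a practical illustration of the port separation described in the Connecting Security Zones section above, the following is a minimal firewalld sketch. It is not taken from this guide: the zone name storage-access, its binding to the cluster-facing interface, and the 192.168.122.0/24 administrator subnet are illustrative assumptions, and the real zone layout depends on your deployment.
# Illustrative firewalld sketch for the zone/port separation discussed above.
# Assumptions: a firewalld zone named "storage-access" is already bound to the
# cluster-facing interface, and 192.168.122.0/24 stands in for the admin subnet.
firewall-cmd --permanent --zone=storage-access --add-port=6789/tcp        # Ceph monitors
firewall-cmd --permanent --zone=storage-access --add-port=6800-7300/tcp   # Ceph OSDs
firewall-cmd --permanent --zone=storage-access --add-rich-rule='rule family="ipv4" source address="192.168.122.0/24" port port="3000" protocol="tcp" accept'   # Grafana, administrators only
firewall-cmd --permanent --zone=public --add-port=443/tcp                 # Ceph Object Gateway HTTPS
firewall-cmd --reload
Opening only these ports toward the storage access and public security zones keeps the cluster-internal traffic on the cluster_network unreachable from less trusted zones.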
Chapter 13. Optimizing virtual machine performance | Chapter 13. Optimizing virtual machine performance Virtual machines (VMs) always experience some degree of performance deterioration in comparison to the host. The following sections explain the reasons for this deterioration and provide instructions on how to minimize the performance impact of virtualization in RHEL 9, so that your hardware infrastructure resources can be used as efficiently as possible. 13.1. What influences virtual machine performance VMs are run as user-space processes on the host. The hypervisor therefore needs to convert the host's system resources so that the VMs can use them. As a consequence, a portion of the resources is consumed by the conversion, and the VM therefore cannot achieve the same performance efficiency as the host. The impact of virtualization on system performance More specific reasons for VM performance loss include: Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler. VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel. Disk and network I/O settings of the host might have a significant performance impact on the VM. Network traffic typically travels to a VM through a software-based bridge. Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware. The severity of the virtualization impact on the VM performance is influenced by a variety of factors, which include: The number of concurrently running VMs. The number of virtual devices used by each VM. The device types used by the VMs. Reducing VM performance loss RHEL 9 provides a number of features you can use to reduce the negative performance effects of virtualization. Notably: The TuneD service can automatically optimize the resource distribution and performance of your VMs. Block I/O tuning can improve the performance of the VM's block devices, such as disks. NUMA tuning can increase vCPU performance. Virtual networking can be optimized in various ways. Important Tuning VM performance can have negative effects on other virtualization functions. For example, it can make migrating the modified VM more difficult. 13.2. Optimizing virtual machine performance by using TuneD The TuneD utility is a tuning profile delivery mechanism that adapts RHEL for certain workload characteristics, such as requirements for CPU-intensive tasks or storage-network throughput responsiveness. It provides a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. You can edit these profiles or create new profiles to create performance solutions tailored to your environment, including virtualized environments. To optimize RHEL 9 for virtualization, use the following profiles: For RHEL 9 virtual machines, use the virtual-guest profile. It is based on the generally applicable throughput-performance profile, but also decreases the swappiness of virtual memory. For RHEL 9 virtualization hosts, use the virtual-host profile. This enables more aggressive writeback of dirty memory pages, which benefits the host performance. Prerequisites The TuneD service is installed and enabled . Procedure To enable a specific TuneD profile: List the available TuneD profiles. Optional: Create a new TuneD profile or edit an existing TuneD profile. For more information, see Customizing TuneD profiles . Activate a TuneD profile.
To optimize a virtualization host, use the virtual-host profile. On a RHEL guest operating system, use the virtual-guest profile. Verification Display the active profile for TuneD . Ensure that the TuneD profile settings have been applied on your system. Additional resources Monitoring and managing system status and performance 13.3. Virtual machine performance optimization for specific workloads Virtual machines (VMs) are frequently dedicated to perform a specific workload. You can improve the performance of your VMs by optimizing their configuration for the intended workload. Table 13.1. Recommended VM configurations for specific use cases Use case IOThread vCPU pinning vNUMA pinning huge pages multi-queue Database For database disks Yes * Yes * Yes * Yes, see: multi-queue virtio-blk, virtio-scsi Virtualized Network Function (VNF) No Yes Yes Yes Yes, see: multi-queue virtio-net High Performance Computing (HPC) No Yes Yes Yes No Backup Server For backup disks No No No Yes, see: multi-queue virtio-blk, virtio-scsi VM with many CPUs (Usually more than 32) No Yes * Yes * No No VM with large RAM (Usually more than 128 GB) No No Yes * Yes No * If the VM has enough CPUs and RAM to use more than one NUMA node. Note A VM can fit in more than one category of use cases. In this situation, you should apply all of the recommended configurations. 13.4. Optimizing libvirt daemons The libvirt virtualization suite works as a management layer for the RHEL hypervisor, and your libvirt configuration significantly impacts your virtualization host. Notably, RHEL 9 contains two different types of libvirt daemons, monolithic or modular, and which type of daemons you use affects how granularly you can configure individual virtualization drivers. 13.4.1. Types of libvirt daemons RHEL 9 supports the following libvirt daemon types: Monolithic libvirt The traditional libvirt daemon, libvirtd , controls a wide variety of virtualization drivers, by using a single configuration file - /etc/libvirt/libvirtd.conf . As such, libvirtd allows for centralized hypervisor configuration, but may use system resources inefficiently. Therefore, libvirtd will become unsupported in a future major release of RHEL. However, if you updated to RHEL 9 from RHEL 8, your host still uses libvirtd by default. Modular libvirt Newly introduced in RHEL 9, modular libvirt provides a specific daemon for each virtualization driver. These include the following: virtqemud - A primary daemon for hypervisor management virtinterfaced - A secondary daemon for host NIC management virtnetworkd - A secondary daemon for virtual network management virtnodedevd - A secondary daemon for host physical device management virtnwfilterd - A secondary daemon for host firewall management virtsecretd - A secondary daemon for host secret management virtstoraged - A secondary daemon for storage management Each of the daemons has a separate configuration file - for example /etc/libvirt/virtqemud.conf . As such, modular libvirt daemons provide better options for fine-tuning libvirt resource management. If you performed a fresh install of RHEL 9, modular libvirt is configured by default. steps If your RHEL 9 uses libvirtd , Red Hat recommends switching to modular daemons. For instructions, see Enabling modular libvirt daemons . 13.4.2. Enabling modular libvirt daemons In RHEL 9, the libvirt library uses modular daemons that handle individual virtualization driver sets on your host. For example, the virtqemud daemon handles QEMU drivers. 
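As a quick way to see which of these modular daemons are present and active on a host, a short shell sketch such as the following can be used; it only wraps systemctl is-active and is not part of the official enabling procedure described below.
# Convenience sketch: report the state of each modular libvirt daemon.
# The unit names follow the virt<driver>d pattern listed above.
for drv in qemu interface network nodedev nwfilter secret storage; do
    printf '%-16s %s\n' "virt${drv}d" "$(systemctl is-active virt${drv}d.service 2>/dev/null)"
done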
If you performed a fresh install of a RHEL 9 host, your hypervisor uses modular libvirt daemons by default. However, if you upgraded your host from RHEL 8 to RHEL 9, your hypervisor uses the monolithic libvirtd daemon, which is the default in RHEL 8. If that is the case, Red Hat recommends enabling the modular libvirt daemons instead, because they provide better options for fine-tuning libvirt resource management. In addition, libvirtd will become unsupported in a future major release of RHEL. Prerequisites Your hypervisor is using the monolithic libvirtd service. If this command displays active , you are using libvirtd . Your virtual machines are shut down. Procedure Stop libvirtd and its sockets. Disable libvirtd to prevent it from starting on boot. Enable the modular libvirt daemons. Start the sockets for the modular daemons. Optional: If you require connecting to your host from remote hosts, enable and start the virtualization proxy daemon. Check whether the libvirtd-tls.socket service is enabled on your system. If libvirtd-tls.socket is not enabled ( listen_tls = 0 ), activate virtproxyd as follows: If libvirtd-tls.socket is enabled ( listen_tls = 1 ), activate virtproxyd as follows: To enable the TLS socket of virtproxyd , your host must have TLS certificates configured to work with libvirt . For more information, see the Upstream libvirt documentation . Verification Activate the enabled virtualization daemons. Verify that your host is using the virtqemud modular daemon. If the status is active , you have successfully enabled modular libvirt daemons. 13.5. Configuring virtual machine memory To improve the performance of a virtual machine (VM), you can assign additional host RAM to the VM. Similarly, you can decrease the amount of memory allocated to a VM so the host memory can be allocated to other VMs or tasks. To perform these actions, you can use the web console or the command line . 13.5.1. Memory overcommitment Virtual machines (VMs) running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each VM functions as a Linux process where the host's Linux kernel allocates memory only when requested. In addition, the host's memory manager can move the VM's memory between its own physical memory and swap space. If memory overcommitment is enabled, the kernel can decide to allocate less physical memory than is requested by a VM, because often the requested amount of memory is not fully used by the VM's process. By default, memory overcommitment is enabled in the Linux kernel and the kernel estimates the safe amount of memory overcommitment for VM's requests. However, the system can still become unstable with frequent overcommitment for memory-intensive workloads. Memory overcommitment requires you to allocate sufficient swap space on the host physical machine to accommodate all VMs as well as enough memory for the host physical machine's processes. For instructions on the basic recommended swap space size, see: What is the recommended swap size for Red Hat platforms? Recommended methods to deal with memory shortages on the host: Allocate less memory per VM. Add more physical memory to the host. Use larger swap space. Important A VM will run slower if it is swapped frequently. In addition, overcommitting can cause the system to run out of memory (OOM), which may lead to the Linux kernel shutting down important system processes. Memory overcommit is not supported with device assignment. 
This is because when device assignment is in use, all virtual machine memory must be statically pre-allocated to enable direct memory access (DMA) with the assigned device. Additional resources Virtual memory parameters 13.5.2. Adding and removing virtual machine memory by using the web console To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the web console to adjust the amount of memory allocated to the VM. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The guest OS is running the memory balloon drivers. To verify this is the case: Ensure the VM's configuration includes the memballoon device: If this command displays any output and the model is not set to none , the memballoon device is present. Ensure the balloon drivers are running in the guest OS. In Windows guests, the drivers are installed as a part of the virtio-win driver package. For instructions, see Installing paravirtualized KVM drivers for Windows virtual machines . In Linux guests, the drivers are generally included by default and activate when the memballoon device is present. The web console VM plug-in is installed on your system . Procedure Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification. Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Click edit next to the Memory line in the Overview pane. The Memory Adjustment dialog appears. Configure the virtual memory for the selected VM. Maximum allocation - Sets the maximum amount of host memory that the VM can use for its processes. You can specify the maximum memory when creating the VM or increase it later. You can specify memory as multiples of MiB or GiB. Adjusting maximum memory allocation is only possible on a shut-off VM. Current allocation - Sets the actual amount of memory allocated to the VM. This value can be less than the Maximum allocation but cannot exceed it. You can adjust the value to regulate the memory available to the VM for its processes. You can specify memory as multiples of MiB or GiB. If you do not specify this value, the default allocation is the Maximum allocation value. Click Save . The memory allocation of the VM is adjusted. Additional resources Adding and removing virtual machine memory by using the command line Optimizing virtual machine CPU performance 13.5.3. Adding and removing virtual machine memory by using the command line To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the CLI to adjust the amount of memory allocated to the VM. Prerequisites The guest OS is running the memory balloon drivers. To verify this is the case: Ensure the VM's configuration includes the memballoon device: If this command displays any output and the model is not set to none , the memballoon device is present. Ensure the balloon drivers are running in the guest OS. In Windows guests, the drivers are installed as a part of the virtio-win driver package.
For instructions, see Installing paravirtualized KVM drivers for Windows virtual machines . In Linux guests, the drivers are generally included by default and activate when the memballoon device is present. Procedure Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification. Adjust the maximum memory allocated to a VM. Increasing this value improves the performance potential of the VM, and reducing the value lowers the performance footprint the VM has on your host. Note that this change can only be performed on a shut-off VM, so adjusting a running VM requires a reboot to take effect. For example, to change the maximum memory that the testguest VM can use to 4096 MiB: To increase the maximum memory of a running VM, you can attach a memory device to the VM. This is also referred to as memory hot plug . For details, see Attaching devices to virtual machines . Warning Removing memory devices from a running VM (also referred as a memory hot unplug) is not supported, and highly discouraged by Red Hat. Optional: You can also adjust the memory currently used by the VM, up to the maximum allocation. This regulates the memory load that the VM has on the host until the reboot, without changing the maximum VM allocation. Verification Confirm that the memory used by the VM has been updated: Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics of the VM to evaluate how effectively it regulates its memory use. Additional resources Adding and removing virtual machine memory by using the web console Optimizing virtual machine CPU performance 13.5.4. Configuring virtual machines to use huge pages In certain use cases, you can improve memory allocation for your virtual machines (VMs) by using huge pages instead of the default 4 KiB memory pages. For example, huge pages can improve performance for VMs with high memory utilization, such as database servers. Prerequisites The host is configured to use huge pages in memory allocation. For instructions, see: Configuring HugeTLB at boot time Procedure Shut down the selected VM if it is running. To configure a VM to use 1 GiB huge pages, open the XML definition of a VM for editing. For example, to edit a testguest VM, run the following command: Add the following lines to the <memoryBacking> section in the XML definition: <memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking> Verification Start the VM. Confirm that the host has successfully allocated huge pages for the running VM. On the host, run the following command: When you add together the number of free and reserved huge pages ( HugePages_Free + HugePages_Rsvd ), the result should be less than the total number of huge pages ( HugePages_Total ). The difference is the number of huge pages that is used by the running VM. Additional resources Configuring huge pages 13.5.5. Adding and removing virtual machine memory by using virtio-mem RHEL 9 provides the virtio-mem paravirtualized memory device. This device makes it possible to dynamically add or remove host memory in virtual machines (VMs). For example, you can use virtio-mem to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements. 13.5.5.1. Overview of virtio-mem virtio-mem is a paravirtualized memory device that can be used to dynamically add or remove host memory in virtual machines (VMs). 
For example, you can use this device to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements. By using virtio-mem , you can increase the memory of a VM beyond its initial size, and shrink it back to its original size, in units that can have the size of 4 to several hundred mebibytes (MiBs). Note, however, that virtio-mem also relies on a specific guest operating system configuration, especially to reliably unplug memory. virtio-mem feature limitations virtio-mem is currently not compatible with the following features: Using memory locking for real-time applications on the host Using encrypted virtualization on the host Combining virtio-mem with memballoon inflation and deflation on the host Unloading or reloading the virtio_mem driver in a VM Using vhost-user devices, with the exception of virtiofs Additional resources Configuring memory onlining in virtual machines Attaching a virtio-mem device to virtual machines 13.5.5.2. Configuring memory onlining in virtual machines Before using virtio-mem to attach memory to a running virtual machine (also known as memory hot-plugging), you must configure the virtual machine (VM) operating system to automatically set the hot-plugged memory to an online state. Otherwise, the guest operating system is not able to use the additional memory. You can choose from one of the following configurations for memory onlining: online_movable online_kernel auto-movable To learn about differences between these configurations, see: Comparison of memory onlining configurations Memory onlining is configured with udev rules by default in RHEL. However, when using virtio-mem , it is recommended to configure memory onlining directly in the kernel. Prerequisites The host uses the Intel 64, AMD64, or ARM 64 CPU architecture. The host uses RHEL 9.4 or later as the operating system. VMs running on the host use one of the following operating system versions: RHEL 8.10 Important Unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs. RHEL 9 Procedure To set memory onlining to use the online_movable configuration in the VM: Set the memhp_default_state kernel command line parameter to online_movable : Reboot the VM. To set memory onlining to use the online_kernel configuration in the VM: Set the memhp_default_state kernel command line parameter to online_kernel : Reboot the VM. To use the auto-movable memory onlining policy in the VM: Set the memhp_default_state kernel command line parameter to online : Set the memory_hotplug.online_policy kernel command line parameter to auto-movable : Optional: To further tune the auto-movable onlining policy, change the memory_hotplug.auto_movable_ratio and memory_hotplug.auto_movable_numa_aware parameters: The memory_hotplug.auto_movable_ratio parameter sets the maximum ratio of memory only available for movable allocations compared to memory available for any allocations. The ratio is expressed in percents and the default value is: 301 (%), which is a 3:1 ratio. The memory_hotplug.auto_movable_numa_aware parameter controls whether the memory_hotplug.auto_movable_ratio parameter applies to memory across all available NUMA nodes or only for memory within a single NUMA node. The default value is: y (yes) For example, if the maximum ratio is set to 301% and the memory_hotplug.auto_movable_numa_aware is set to y (yes), than the 3:1 ratio is applied even within the NUMA node with the attached virtio-mem device. 
If the parameter is set to n (no), the maximum 3:1 ratio is applied only for all the NUMA nodes as a whole. Additionally, if the ratio is not exceeded, the newly hot-plugged memory will be available only for movable allocations. Otherwise, the newly hot-plugged memory will be available for both movable and unmovable allocations. Reboot the VM. Verification To see if the online_movable configuration has been set correctly, check the current value of the memhp_default_state kernel parameter: To see if the online_kernel configuration has been set correctly, check the current value of the memhp_default_state kernel parameter: To see if the auto-movable configuration has been set correctly, check the following kernel parameters: memhp_default_state : memory_hotplug.online_policy : memory_hotplug.auto_movable_ratio : memory_hotplug.auto_movable_numa_aware : Additional resources Overview of virtio-mem Attaching a virtio-mem device to virtual machines Configuring Memory Hot(Un)Plug 13.5.5.3. Attaching a virtio-mem device to virtual machines To attach additional memory to a running virtual machine (also known as memory hot-plugging) and afterwards be able to resize the hot-plugged memory, you can use a virtio-mem device. Specifically, you can use libvirt XML configuration files and virsh commands to define and attach virtio-mem devices to virtual machines (VMs). Prerequisites The host uses the Intel 64, AMD64, or ARM 64 CPU architecture. The host uses RHEL 9.4 or later as the operating system. VMs running on the host use one of the following operating system versions: RHEL 8.10 Important Unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs. RHEL 9 The VM has memory onlining configured. For instructions, see: Configuring memory onlining in virtual machines Procedure Ensure the XML configuration of the target VM includes the maxMemory parameter: In this example, the XML configuration of the testguest1 VM defines a maxMemory parameter with a 128 gibibyte (GiB) size. The maxMemory size specifies the maximum memory the VM can use, which includes both initial and hot-plugged memory. Create and open an XML file to define virtio-mem devices on the host, for example: Add XML definitions of virtio-mem devices to the file and save it: <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>1</node> <block unit='MiB'>2</block> <requested unit='GiB'>0</requested> <current unit='GiB'>0</current> </target> <alias name='ua-virtiomem1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memory> In this example, two virtio-mem devices are defined with the following parameters: size : This is the maximum size of the device. In the example, it is 48 GiB. The size must be a multiple of the block size. node : This is the assigned vNUMA node for the virtio-mem device. block : This is the block size of the device. It must be at least the size of the Transparent Huge Page (THP), which is 2 MiB on Intel 64 and AMD64 CPU architecture. On ARM64 architecture, the size of THP can be 2 MiB or 512 MiB depending on the base page size. The 2 MiB block size on Intel 64 or AMD64 architecture is usually a good default choice. 
When using virtio-mem with Virtual Function I/O (VFIO) or mediated devices (mdev) , the total number of blocks across all virtio-mem devices must not be larger than 32768, otherwise the plugging of RAM might fail. requested : This is the amount of memory you attach to the VM with the virtio-mem device. However, it is just a request towards the VM and it might not be resolved successfully, for example if the VM is not properly configured. The requested size must be a multiple of the block size and cannot exceed the maximum defined size . current : This represents the current size the virtio-mem device provides to the VM. The current size can differ from the requested size, for example when requests cannot be completed or when rebooting the VM. alias : This is an optional user-defined alias that you can use to specify the intended virtio-mem device, for example when editing the device with libvirt commands. All user-defined aliases in libvirt must start with the "ua-" prefix. Apart from these specific parameters, libvirt handles the virtio-mem device like any other PCI device. Use the XML file to attach the defined virtio-mem devices to a VM. For example, to permanently attach the two devices defined in the virtio-mem-device.xml to the running testguest1 VM: The --live option attaches the device to a running VM only, without persistence between boots. The --config option makes the configuration changes persistent. You can also attach the device to a shutdown VM without the --live option. Optional: To dynamically change the requested size of a virtio-mem device attached to a running VM, use the virsh update-memory-device command: In this example: testguest1 is the VM you want to update. --alias ua-virtiomem0 is the virtio-mem device specified by a previously defined alias. --requested-size 4GiB changes the requested size of the virtio-mem device to 4 GiB. Warning Unplugging memory from a running VM by reducing the requested size might be unreliable. Whether this process succeeds depends on various factors, such as the memory onlining policy that is used. In some cases, the guest operating system cannot complete the request successfully, because changing the amount of hot-plugged memory is not possible at that time. Additionally, unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs. Optional: To unplug a virtio-mem device from a shut-down VM, use the virsh detach-device command: Optional: To unplug a virtio-mem device from a running VM: Change the requested size of the virtio-mem device to 0, otherwise the attempt to unplug a virtio-mem device from a running VM will fail. Unplug a virtio-mem device from the running VM: Verification In the VM, check the available RAM and see if the total amount now includes the hot-plugged memory: The current amount of plugged-in RAM can be also viewed on the host by displaying the XML configuration of the running VM: In this example: <currentMemory unit='GiB'>31</currentMemory> represents the total RAM available in the VM from all sources. <current unit='GiB'>16</current> represents the current size of the plugged-in RAM provided by the virtio-mem device. Additional resources Overview of virtio-mem Configuring memory onlining in virtual machines 13.5.5.4. Comparison of memory onlining configurations When attaching memory to a running RHEL virtual machine (also known as memory hot-plugging), you must set the hot-plugged memory to an online state in the virtual machine (VM) operating system. 
Otherwise, the system will not be able to use the memory. The following table summarizes the main considerations when choosing between the available memory onlining configurations. Table 13.2. Comparison of memory onlining configurations Configuration name Unplugging memory from a VM A risk of creating a memory zone imbalance A potential use case Memory requirements of the intended workload online_movable Hot-plugged memory can be reliably unplugged. Yes Hot-plugging a comparatively small amount of memory Mostly user-space memory auto-movable Movable portions of hot-plugged memory can be reliably unplugged. Minimal Hot-plugging a large amount of memory Mostly user-space memory online_kernel Hot-plugged memory cannot be reliably unplugged. No Unreliable memory unplugging is acceptable. User-space or kernel-space memory A zone imbalance is a lack of available memory pages in one of the Linux memory zones. A zone imbalance can negatively impact the system performance. For example, the kernel might crash if it runs out of free memory for unmovable allocations. Usually, movable allocations contain mostly user-space memory pages and unmovable allocations contain mostly kernel-space memory pages. Additional resources Onlining and Offlining Memory Blocks Zone Imbalances Configuring memory onlining in virtual machines 13.5.6. Additional resources Attaching devices to virtual machines Attaching devices to virtual machines . 13.6. Optimizing virtual machine I/O performance The input and output (I/O) capabilities of a virtual machine (VM) can significantly limit the VM's overall efficiency. To address this, you can optimize a VM's I/O by configuring block I/O parameters. 13.6.1. Tuning block I/O in virtual machines When multiple block devices are being used by one or more VMs, it might be important to adjust the I/O priority of specific virtual devices by modifying their I/O weights . Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it with more host resources. Similarly, reducing a device's weight makes it consume less host resources. Note Each device's weight value must be within the 100 to 1000 range. Alternatively, the value can be 0 , which removes that device from per-device listings. Procedure To display and set a VM's block I/O parameters: Display the current <blkio> parameters for a VM: # virsh dumpxml VM-name <domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain> Edit the I/O weight of a specified device: For example, the following changes the weight of the /dev/sda device in the testguest1 VM to 500. Verification Check that the VM's block I/O parameters have been configured correctly. Important Certain kernels do not support setting I/O weights for specific devices. If the step does not display the weights as expected, it is likely that this feature is not compatible with your host kernel. 13.6.2. Disk I/O throttling in virtual machines When several VMs are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM virtualization provides the ability to set a limit on disk I/O requests sent from the VMs to the host machine. This can prevent a VM from over-utilizing shared resources and impacting the performance of other VMs. 
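Before changing weights or adding limits, it can help to review the block I/O tuning that is already in effect for a VM. The following is a small sketch; the VM name testguest1 and the vda disk target follow the example names used elsewhere in this chapter and are illustrative.
# Illustrative pre-check of existing block I/O tuning for a VM.
virsh blkiotune testguest1       # current global and per-device weights
virsh domblklist testguest1      # disk targets and their sources
virsh domblkstat testguest1 vda  # I/O statistics for one disk target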
To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to VMs to the host machine. Procedure Use the virsh domblklist command to list the names of all the disk devices on a specified VM. Find the host block device where the virtual disk that you want to throttle is mounted. For example, if you want to throttle the sdb virtual disk from the step, the following output shows that the disk is mounted on the /dev/nvme0n1p3 partition. Set I/O limits for the block device by using the virsh blkiotune command. The following example throttles the sdb disk on the rollin-coal VM to 1000 read and write I/O operations per second and to 50 MB per second read and write throughput. Additional information Disk I/O throttling can be useful in various situations, for example when VMs belonging to different customers are running on the same host, or when quality of service guarantees are given for different VMs. Disk I/O throttling can also be used to simulate slower disks. I/O throttling can be applied independently to each block device attached to a VM and supports limits on throughput and I/O operations. Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in VMs. For more information about unsupported features when using RHEL 9 as a VM host, see Unsupported features in RHEL 9 virtualization . 13.6.3. Enabling multi-queue on storage devices When using virtio-blk or virtio-scsi storage devices in your virtual machines (VMs), the multi-queue feature provides improved storage performance and scalability. It enables each virtual CPU (vCPU) to have a separate queue and interrupt to use without affecting other vCPUs. The multi-queue feature is enabled by default for the Q35 machine type, however you must enable it manually on the i440FX machine type. You can tune the number of queues to be optimal for your workload, however the optimal number differs for each type of workload and you must test which number of queues works best in your case. Procedure To enable multi-queue on a storage device, edit the XML configuration of the VM. In the XML configuration, find the intended storage device and change the queues parameter to use multiple I/O queues. Replace N with the number of vCPUs in the VM, up to 16. A virtio-blk example: <disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> A virtio-scsi example: <controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller> Restart the VM for the changes to take effect. 13.6.4. Configuring dedicated IOThreads To improve the Input/Output (IO) performance of a disk on your virtual machine (VM), you can configure a dedicated IOThread that is used to manage the IO operations of the VM's disk. Normally, the IO operations of a disk are a part of the main QEMU thread, which can decrease the responsiveness of the VM as a whole during intensive IO workloads. By separating the IO operations to a dedicated IOThread , you can significantly increase the responsiveness and performance of your VM. Procedure Shut down the selected VM if it is running. On the host, add or edit the <iothreads> tag in the XML configuration of the VM. For example, to create a single IOThread for a testguest1 VM: Note For optimal results, use only 1-2 IOThreads per CPU on the host. Assign a dedicated IOThread` to a VM disk. 
For example, to assign an IOThread with ID of 1 to a disk on the testguest1 VM: Note IOThread IDs start from 1 and you must dedicate only a single IOThread to a disk. Usually, one dedicated IOThread per VM is sufficient for optimal performance. When using virtio-scsi storage devices, assign a dedicated IOThread to the virtio-scsi controller. For example, to assign an IOThread with ID of 1 to a controller on the testguest1 VM: Verification Evaluate the impact of your changes on your VM performance. For details, see: Virtual machine performance monitoring tools 13.6.5. Configuring virtual disk caching KVM provides several virtual disk caching modes. For intensive Input/Output (IO) workloads, selecting the optimal caching mode can significantly increase the virtual machine (VM) performance. Virtual disk cache modes overview writethrough Host page cache is used for reading only. Writes are reported as completed only when the data has been committed to the storage device. The sustained IO performance is decreased but this mode has good write guarantees. writeback Host page cache is used for both reading and writing. Writes are reported as complete when data reaches the host's memory cache, not physical storage. This mode has faster IO performance than writethrough but it is possible to lose data on host failure. none Host page cache is bypassed entirely. This mode relies directly on the write queue of the physical disk, so it has a predictable sustained IO performance and offers good write guarantees on a stable guest. It is also a safe cache mode for VM live migration. Procedure Shut down the selected VM if it is running. Edit the XML configuration of the selected VM. Find the disk device and edit the cache option in the driver tag. <domain type='kvm'> <name>testguest1</name> ... <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> ... </devices> ... </domain> 13.7. Optimizing virtual machine CPU performance Much like physical CPUs in host machines, vCPUs are critical to virtual machine (VM) performance. As a result, optimizing vCPUs can have a significant impact on the resource efficiency of your VMs. To optimize your vCPU: Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web console . Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host: On an ARM 64 system, use --cpu host-passthrough . Manage kernel same-page merging (KSM) . If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host's CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness. For details, see Configuring NUMA in a virtual machine and Virtual machine performance optimization for specific workloads . 13.7.1. vCPU overcommitment vCPU overcommitment allows you to have a setup where the sum of all vCPUs in virtual machines (VMs) running on a host exceeds the number of physical CPUs on the host.
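For example, a host with 8 physical CPU threads that runs VMs with a combined total of 16 vCPUs is overcommitted at a 2:1 ratio. A rough sketch for calculating the current ratio on a host is shown below; it assumes the system libvirt instance and only counts running VMs.
# Rough sketch: compare vCPUs of running VMs with host CPU threads.
total_vcpus=0
for vm in $(virsh list --name); do
    total_vcpus=$(( total_vcpus + $(virsh vcpucount "$vm" --live --active) ))
done
echo "vCPUs allocated to running VMs: ${total_vcpus}"
echo "Physical CPU threads on host:   $(nproc)"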
However, you might experience performance deterioration when simultaneously running more cores in your VMs than are physically available on the host. For best performance, assign VMs with only as many vCPUs as are required to run the intended workloads in each VM. vCPU overcommitment recommendations: Assign the minimum number of vCPUs required by the VM's workloads for best performance. Avoid overcommitting vCPUs in production without extensive testing. If overcommitting vCPUs, the safe ratio is typically 5 vCPUs to 1 physical CPU for loads under 100%. It is not recommended to have more than 10 total allocated vCPUs per physical processor core. Monitor CPU usage to prevent performance degradation under heavy loads. Important Applications that use 100% of memory or processing resources may become unstable in overcommitted environments. Do not overcommit memory or CPUs in a production environment without extensive testing, as the CPU overcommit ratio is workload-dependent. 13.7.2. Adding and removing virtual CPUs by using the command line To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM. When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplug is not supported in RHEL 9, and Red Hat highly discourages its use. Prerequisites Optional: View the current state of the vCPUs in the targeted VM. For example, to display the number of vCPUs on the testguest VM: This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM's performance. However, after reboot, the number of vCPUs testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs. Procedure Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the VM's next boot. For example, to increase the maximum vCPU count for the testguest VM to 8: Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors. Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the previous step. For example: To increase the number of vCPUs attached to the running testguest VM to 4: This increases the VM's performance and host load footprint of testguest until the VM's next boot. To permanently decrease the number of vCPUs attached to the testguest VM to 1: This decreases the VM's performance and host load footprint of testguest after the VM's next boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance. Verification Confirm that the current state of vCPU for the VM reflects your changes. Additional resources Managing virtual CPUs by using the web console 13.7.3. Managing virtual CPUs by using the web console By using the RHEL 9 web console, you can review and configure virtual CPUs used by virtual machines (VMs) to which the web console is connected. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Click edit to the number of vCPUs in the Overview pane. The vCPU details dialog appears. Configure the virtual CPUs for the selected VM. vCPU Count - The number of vCPUs currently in use. Note The vCPU count cannot be greater than the vCPU Maximum. vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count , additional vCPUs can be attached to the VM. Sockets - The number of sockets to expose to the VM. Cores per socket - The number of cores for each socket to expose to the VM. Threads per core - The number of threads for each core to expose to the VM. Note that the Sockets , Cores per socket , and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values. Click Apply . The virtual CPUs for the VM are configured. Note Changes to virtual CPU settings only take effect after the VM is restarted. Additional resources Adding and removing virtual CPUs by using the command line 13.7.4. Configuring NUMA in a virtual machine The following methods can be used to configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM) on a RHEL 9 host. For ease of use, you can set up a VM's NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement. Prerequisites The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh nodeinfo command and see the NUMA cell(s) line: If the value of the line is 2 or greater, the host is NUMA-compatible. Optional: You have the numactl package installed on the host. Procedure Automatic methods Set the VM's NUMA policy to Preferred . For example, to configure the testguest5 VM: Use the numad service to automatically align the VM CPU with memory resources. Start the numad service to automatically align the VM CPU with memory resources. Manual methods To manually tune NUMA settings, you can specify which host NUMA nodes will be assigned specifically to a certain VM. This can improve the host memory usage by the VM's vCPU. Optional: Use the numactl command to view the NUMA topology on the host: Edit the XML configuration of a VM to assign CPU and memory resources to specific NUMA nodes. For example, the following configuration sets testguest6 to use vCPUs 0-7 on NUMA node 0 and vCPUS 8-15 on NUMA node 1 . Both nodes are also assigned 16 GiB of VM's memory. If the VM is running, restart it to apply the configuration. Note For best performance results, it is recommended to respect the maximum memory size for each NUMA node on the host. Known issues NUMA tuning currently cannot be performed on IBM Z hosts . Additional resources Sample vCPU performance tuning scenario View the current NUMA configuration of your system using the numastat utility 13.7.5. Configuring virtual CPU pinning To improve the CPU performance of a virtual machine (VM), you can pin a virtual CPU (vCPU) to a specific physical CPU thread on the host. This ensures that the vCPU will have its own dedicated physical CPU thread, which can significantly improve the vCPU performance. 
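One pattern that combines this idea with the NUMA tuning described above is to pin every vCPU of a VM to the CPU list of a single host NUMA node. The following is a hedged sketch, not the official procedure: the VM name testguest6 and node 0 are illustrative, and the per-vCPU virsh vcpupin commands shown in the procedure below remain the authoritative method.
# Illustrative sketch: pin all vCPUs of a VM to the CPUs of one host NUMA node.
vm=testguest6            # example VM name
node=0                   # example NUMA node
node_cpus=$(lscpu -p=CPU,NODE | awk -F, -v n="$node" '!/^#/ && $2 == n {print $1}' | paste -sd, -)
for (( vcpu = 0; vcpu < $(virsh vcpucount "$vm" --live --active); vcpu++ )); do
    virsh vcpupin "$vm" "$vcpu" "$node_cpus"
done
virsh vcpupin "$vm"      # show the resulting pinning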
To further optimize the CPU performance, you can also pin QEMU process threads associated with a specified VM to a specific host CPU. Procedure Check the CPU topology on the host: In this example, the output contains NUMA nodes and the available physical CPU threads on the host. Check the number of vCPU threads inside the VM: In this example, the output contains NUMA nodes and the available vCPU threads inside the VM. Pin specific vCPU threads from a VM to a specific host CPU or range of CPUs. This is recommended as a safe method of vCPU performance improvement. For example, the following commands pin vCPU threads 0 to 3 of the testguest6 VM to host CPUs 1, 3, 5, 7, respectively: Optional: Verify whether the vCPU threads are successfully pinned to CPUs. After pinning vCPU threads, you can also pin QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. This can further help the QEMU process to run more efficiently on the physical CPU. For example, the following commands pin the QEMU process thread of testguest6 to CPUs 2 and 4, and verify this was successful: 13.7.6. Configuring virtual CPU capping You can use virtual CPU (vCPU) capping to limit the amount of CPU resources a virtual machine (VM) can use. vCPU capping can improve the overall performance by preventing excessive use of host's CPU resources by a single VM and by making it easier for the hypervisor to manage CPU scheduling. Procedure View the current vCPU scheduling configuration on the host. To configure an absolute vCPU cap for a VM, set the vcpu_period and vcpu_quota parameters. Both parameters use a numerical value that represents a time duration in microseconds. Set the vcpu_period parameter by using the virsh schedinfo command. For example: In this example, the vcpu_period is set to 100,000 microseconds, which means the scheduler enforces vCPU capping during this time interval. You can also use the --live --config options to configure a running VM without restarting it. Set the vcpu_quota parameter by using the virsh schedinfo command. For example: In this example, the vcpu_quota is set to 50,000 microseconds, which specifies the maximum amount of CPU time that the VM can use during the vcpu_period time interval. In this case, vcpu_quota is set as the half of vcpu_period , so the VM can use up to 50% of the CPU time during that interval. You can also use the --live --config options to configure a running VM without restarting it. Verification Check that the vCPU scheduling parameters have the correct values. 13.7.7. Tuning CPU weights The CPU weight (or CPU shares ) setting controls how much CPU time a virtual machine (VM) receives compared to other running VMs. By increasing the CPU weight of a specific VM, you can ensure that this VM gets more CPU time relative to other VMs. To prioritize CPU time allocation between multiple VMs, set the cpu_shares parameter The possible CPU weight values range from 0 to 262144 and the default value for a new KVM VM is 1024 . Procedure Check the current CPU weight of a VM. Adjust the CPU weight to a preferred value. In this example, cpu_shares is set to 2048. This means that if all other VMs have the value set to 1024, this VM gets approximately twice the amount of CPU time. You can also use the --live --config options to configure a running VM without restarting it. 13.7.8. Enabling and disabling kernel same-page merging Kernel Same-Page Merging (KSM) improves memory density by sharing identical memory pages between virtual machines (VMs). 
Therefore, enabling KSM might improve memory efficiency of your VM deployment. However, enabling KSM also increases CPU utilization, and might negatively affect overall performance depending on the workload. In RHEL 9 and later, KSM is disabled by default. To enable KSM and test its impact on your VM performance, see the following instructions. Prerequisites Root access to your host system. Procedure Enable KSM: Warning Enabling KSM increases CPU utilization and affects overall CPU performance. Install the ksmtuned service: Start the service: To enable KSM for a single session, use the systemctl utility to start the ksm and ksmtuned services. To enable KSM persistently, use the systemctl utility to enable the ksm and ksmtuned services. Monitor the performance and resource consumption of VMs on your host to evaluate the benefits of activating KSM. Specifically, ensure that the additional CPU usage by KSM does not offset the memory improvements and does not cause additional performance issues. In latency-sensitive workloads, also pay attention to cross-NUMA page merges. Optional: If KSM has not improved your VM performance, disable it: To disable KSM for a single session, use the systemctl utility to stop ksm and ksmtuned services. To disable KSM persistently, use the systemctl utility to disable ksm and ksmtuned services. Note Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM pages in the system by using the following command: However, this command increases memory usage, and might cause performance problems on your host or your VMs. 13.8. Optimizing virtual machine network performance Due to the virtual nature of a VM's network interface controller (NIC), the VM loses a portion of its allocated host network bandwidth, which can reduce the overall workload efficiency of the VM. The following tips can minimize the negative impact of virtualization on the virtual NIC (vNIC) throughput. Procedure Use any of the following methods and observe if it has a beneficial effect on your VM network performance: Enable the vhost_net module On the host, ensure the vhost_net kernel feature is enabled: If the output of this command is blank, enable the vhost_net kernel module: Set up multi-queue virtio-net To set up the multi-queue virtio-net feature for a VM, use the virsh edit command to edit to the XML configuration of the VM. In the XML, add the following to the <devices> section, and replace N with the number of vCPUs in the VM, up to 16: If the VM is running, restart it for the changes to take effect. Batching network packets In Linux VM configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To set up packet batching, use the following command on the host, and replace tap0 with the name of the network interface that the VMs use: SR-IOV If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more information, see Managing SR-IOV devices . Additional resources Understanding virtual networking 13.9. Virtual machine performance monitoring tools To identify what consumes the most VM resources and which aspect of VM performance needs optimization, performance diagnostic tools, both general and VM-specific, can be used. 
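Before going through the individual tools, the following is a small sketch of the kind of host-side spot check that the rest of this section describes. It only wraps standard utilities (ps and virsh), and the number of lines shown is arbitrary.
# Spot-check sketch: host CPU/memory usage of QEMU processes, plus the
# libvirt view of CPU time and balloon size for each running VM.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | awk 'NR==1 || /qemu/' | head -n 10
for vm in $(virsh list --name); do
    echo "== ${vm} =="
    virsh domstats "$vm" --cpu-total --balloon
done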
Default OS performance monitoring tools For standard performance evaluation, you can use the utilities provided by default by your host and guest operating systems: On your RHEL 9 host, as root, use the top utility or the system monitor application, and look for qemu and virt in the output. This shows how much host system resources your VMs are consuming. If the monitoring tool displays that any of the qemu or virt processes consume a large portion of the host CPU or memory capacity, use the perf utility to investigate. For details, see below. In addition, if a vhost_net thread process, named for example vhost_net-1234 , is displayed as consuming an excessive amount of host CPU capacity, consider using virtual network optimization features , such as multi-queue virtio-net . On the guest operating system, use performance utilities and applications available on the system to evaluate which processes consume the most system resources. On Linux systems, you can use the top utility. On Windows systems, you can use the Task Manager application. perf kvm You can use the perf utility to collect and analyze virtualization-specific statistics about the performance of your RHEL 9 host. To do so: On the host, install the perf package: Use one of the perf kvm stat commands to display perf statistics for your virtualization host: For real-time monitoring of your hypervisor, use the perf kvm stat live command. To log the perf data of your hypervisor over a period of time, activate the logging by using the perf kvm stat record command. After the command is canceled or interrupted, the data is saved in the perf.data.guest file, which can be analyzed by using the perf kvm stat report command. Analyze the perf output for types of VM-EXIT events and their distribution. For example, the PAUSE_INSTRUCTION events should be infrequent, but in the following output, the high occurrence of this event suggests that the host CPUs are not handling the running vCPUs well. In such a scenario, consider shutting down some of your active VMs, removing vCPUs from these VMs, or tuning the performance of the vCPUs . Other event types that can signal problems in the output of perf kvm stat include: INSN_EMULATION - suggests suboptimal VM I/O configuration . For more information about using perf to monitor virtualization performance, see the perf-kvm man page on your system. numastat To see the current NUMA configuration of your system, you can use the numastat utility, which is provided by installing the numactl package. The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This is not optimal for vCPU performance, and warrants adjusting : In contrast, the following shows memory being provided to each VM by a single node, which is significantly more efficient. 13.10. Additional resources Optimizing Windows virtual machines | [
"tuned-adm list Available profiles: - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case [...] - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced",
"tuned-adm profile selected-profile",
"tuned-adm profile virtual-host",
"tuned-adm profile virtual-guest",
"tuned-adm active Current active profile: virtual-host",
"tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.",
"systemctl is-active libvirtd.service active",
"systemctl stop libvirtd.service systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket",
"systemctl disable libvirtd.service systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket",
"for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virtUSD{drv}d.service; systemctl unmask virtUSD{drv}d{,-ro,-admin}.socket; systemctl enable virtUSD{drv}d.service; systemctl enable virtUSD{drv}d{,-ro,-admin}.socket; done",
"for drv in qemu network nodedev nwfilter secret storage; do systemctl start virtUSD{drv}d{,-ro,-admin}.socket; done",
"grep listen_tls /etc/libvirt/libvirtd.conf listen_tls = 0",
"systemctl unmask virtproxyd.service systemctl unmask virtproxyd{,-ro,-admin}.socket systemctl enable virtproxyd.service systemctl enable virtproxyd{,-ro,-admin}.socket systemctl start virtproxyd{,-ro,-admin}.socket",
"systemctl unmask virtproxyd.service systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket systemctl enable virtproxyd.service systemctl enable virtproxyd{,-ro,-admin,-tls}.socket systemctl start virtproxyd{,-ro,-admin,-tls}.socket",
"virsh uri qemu:///system",
"systemctl is-active virtqemud.service active",
"virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>",
"virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB",
"virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>",
"virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB",
"virt-xml testguest --edit --memory memory=4096,currentMemory=4096 Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.",
"virsh setmem testguest --current 2048",
"virsh dominfo testguest Max memory: 4194304 KiB Used memory: 2097152 KiB",
"virsh domstats --balloon testguest Domain: 'testguest' balloon.current=365624 balloon.maximum=4194304 balloon.swap_in=0 balloon.swap_out=0 balloon.major_fault=306 balloon.minor_fault=156117 balloon.unused=3834448 balloon.available=4035008 balloon.usable=3746340 balloon.last-update=1587971682 balloon.disk_caches=75444 balloon.hugetlb_pgalloc=0 balloon.hugetlb_pgfail=0 balloon.rss=1005456",
"virsh edit testguest",
"<memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking>",
"cat /proc/meminfo | grep Huge HugePages_Total: 4 HugePages_Free: 2 HugePages_Rsvd: 1 Hugepagesize: 1024000 kB",
"grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_movable",
"grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_kernel",
"grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online",
"grubby --update-kernel=ALL --remove-args=\"memory_hotplug.online_policy\" --args=memory_hotplug.online_policy=auto-movable",
"grubby --update-kernel=ALL --remove-args=\"memory_hotplug.auto_movable_ratio\" --args=memory_hotplug.auto_movable_ratio= <percentage> grubby --update-kernel=ALL --remove-args=\"memory_hotplug.memory_auto_movable_numa_aware\" --args=memory_hotplug.auto_movable_numa_aware= <y/n>",
"cat /sys/devices/system/memory/auto_online_blocks online_movable",
"cat /sys/devices/system/memory/auto_online_blocks online_kernel",
"cat /sys/devices/system/memory/auto_online_blocks online",
"cat /sys/module/memory_hotplug/parameters/online_policy auto-movable",
"cat /sys/module/memory_hotplug/parameters/auto_movable_ratio 301",
"cat /sys/module/memory_hotplug/parameters/auto_movable_numa_aware y",
"virsh edit testguest1 <domain type='kvm'> <name>testguest1</name> <maxMemory unit='GiB'>128</maxMemory> </domain>",
"vim virtio-mem-device.xml",
"<memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>1</node> <block unit='MiB'>2</block> <requested unit='GiB'>0</requested> <current unit='GiB'>0</current> </target> <alias name='ua-virtiomem1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memory>",
"virsh attach-device testguest1 virtio-mem-device.xml --live --config",
"virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 4GiB",
"virsh detach-device testguest1 virtio-mem-device.xml",
"virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 0",
"virsh detach-device testguest1 virtio-mem-device.xml --config",
"free -h total used free shared buff/cache available Mem: 31Gi 5.5Gi 14Gi 1.3Gi 11Gi 23Gi Swap: 8.0Gi 0B 8.0Gi",
"numactl -H available: 1 nodes (0) node 0 cpus: 0 1 2 3 4 5 6 7 node 0 size: 29564 MB node 0 free: 13351 MB node distances: node 0 0: 10",
"virsh dumpxml testguest1 <domain type='kvm'> <name>testguest1</name> <currentMemory unit='GiB'>31</currentMemory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </domain>",
"<domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain>",
"virsh blkiotune VM-name --device-weights device , I/O-weight",
"virsh blkiotune testguest1 --device-weights /dev/sda, 500",
"virsh blkiotune testguest1 Block I/O tuning parameters for domain testguest1: weight : 800 device_weight : [ {\"sda\": 500}, ]",
"virsh domblklist rollin-coal Target Source ------------------------------------------------ vda /var/lib/libvirt/images/rollin-coal.qcow2 sda - sdb /home/horridly-demanding-processes.iso",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT zram0 252:0 0 4G 0 disk [SWAP] nvme0n1 259:0 0 238.5G 0 disk ├─nvme0n1p1 259:1 0 600M 0 part /boot/efi ├─nvme0n1p2 259:2 0 1G 0 part /boot └─nvme0n1p3 259:3 0 236.9G 0 part └─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0 0 236.9G 0 crypt /home",
"virsh blkiotune VM-name --parameter device , limit",
"virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800",
"virsh edit <example_vm>",
"<disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>",
"<controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller>",
"virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <vcpu placement='static'>8</vcpu> <iothreads>1</iothreads> </domain>",
"virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1' /> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>",
"virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1' /> <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> </controller> </devices> </domain>",
"virsh edit <vm_name>",
"<domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>",
"virt-xml testguest1 --edit --cpu host-model",
"virsh vcpucount testguest maximum config 4 maximum live 2 current config 2 current live 1",
"virsh setvcpus testguest 8 --maximum --config",
"virsh setvcpus testguest 4 --live",
"virsh setvcpus testguest 1 --config",
"virsh vcpucount testguest maximum config 8 maximum live 4 current config 1 current live 4",
"virsh nodeinfo CPU model: x86_64 CPU(s): 48 CPU frequency: 1200 MHz CPU socket(s): 1 Core(s) per socket: 12 Thread(s) per core: 2 NUMA cell(s): 2 Memory size: 67012964 KiB",
"dnf install numactl",
"virt-xml testguest5 --edit --vcpus placement=auto virt-xml testguest5 --edit --numatune mode=preferred",
"echo 1 > /proc/sys/kernel/numa_balancing",
"systemctl start numad",
"numactl --hardware available: 2 nodes (0-1) node 0 size: 18156 MB node 0 free: 9053 MB node 1 size: 18180 MB node 1 free: 6853 MB node distances: node 0 1 0: 10 20 1: 20 10",
"virsh edit <testguest6> <domain type='kvm'> <name>testguest6</name> <vcpu placement='static'>16</vcpu> <cpu ...> <numa> <cell id='0' cpus='0-7' memory='16' unit='GiB'/> <cell id='1' cpus='8-15' memory='16' unit='GiB'/> </numa> </domain>",
"lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3 0,4 0,5 0,6 0,7 1,0 1,1 1,2 1,3 1,4 1,5 1,6 1,7",
"lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3",
"virsh vcpupin testguest6 0 1 virsh vcpupin testguest6 1 3 virsh vcpupin testguest6 2 5 virsh vcpupin testguest6 3 7",
"virsh vcpupin testguest6 VCPU CPU Affinity ---------------------- 0 1 1 3 2 5 3 7",
"virsh emulatorpin testguest6 2,4 virsh emulatorpin testguest6 emulator: CPU Affinity ---------------------------------- *: 2,4",
"virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 0 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0",
"virsh schedinfo <vm_name> --set vcpu_period=100000",
"virsh schedinfo <vm_name> --set vcpu_quota=50000",
"virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 2048 vcpu_period : 100000 vcpu_quota : 50000",
"virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 1024 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0",
"virsh schedinfo <vm_name> --set cpu_shares=2048 Scheduler : posix cpu_shares : 2048 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0",
"{PackageManagerCommand} install ksmtuned",
"systemctl start ksm systemctl start ksmtuned",
"systemctl enable ksm Created symlink /etc/systemd/system/multi-user.target.wants/ksm.service /usr/lib/systemd/system/ksm.service systemctl enable ksmtuned Created symlink /etc/systemd/system/multi-user.target.wants/ksmtuned.service /usr/lib/systemd/system/ksmtuned.service",
"systemctl stop ksm systemctl stop ksmtuned",
"systemctl disable ksm Removed /etc/systemd/system/multi-user.target.wants/ksm.service. systemctl disable ksmtuned Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.",
"echo 2 > /sys/kernel/mm/ksm/run",
"lsmod | grep vhost vhost_net 32768 1 vhost 53248 1 vhost_net tap 24576 1 vhost_net tun 57344 6 vhost_net",
"modprobe vhost_net",
"<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>",
"ethtool -C tap0 rx-frames 64",
"dnf install perf",
"perf kvm stat report Analyze events for all VMs, all VCPUs: VM-EXIT Samples Samples% Time% Min Time Max Time Avg time EXTERNAL_INTERRUPT 365634 31.59% 18.04% 0.42us 58780.59us 204.08us ( +- 0.99% ) MSR_WRITE 293428 25.35% 0.13% 0.59us 17873.02us 1.80us ( +- 4.63% ) PREEMPTION_TIMER 276162 23.86% 0.23% 0.51us 21396.03us 3.38us ( +- 5.19% ) PAUSE_INSTRUCTION 189375 16.36% 11.75% 0.72us 29655.25us 256.77us ( +- 0.70% ) HLT 20440 1.77% 69.83% 0.62us 79319.41us 14134.56us ( +- 0.79% ) VMCALL 12426 1.07% 0.03% 1.02us 5416.25us 8.77us ( +- 7.36% ) EXCEPTION_NMI 27 0.00% 0.00% 0.69us 1.34us 0.98us ( +- 3.50% ) EPT_MISCONFIG 5 0.00% 0.00% 5.15us 10.85us 7.88us ( +- 11.67% ) Total Samples:1157497, Total events handled time:413728274.66us.",
"numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128 51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076 53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116 53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 1769 463 2024 7462 10037 2672 169 7837 32434",
"numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080 53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120 53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118 59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 0 0 8072 0 8072 0 8114 8110 32368"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/optimizing-virtual-machine-performance-in-rhel_monitoring-and-managing-system-status-and-performance |
Chapter 25. Rack schema reference | Chapter 25. Rack schema reference Used in: KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of Rack schema properties The rack option configures rack awareness. A rack can represent an availability zone, data center, or an actual rack in your data center. The rack is configured through a topologyKey . topologyKey identifies a label on OpenShift nodes that contains the name of the topology in its value. An example of such a label is topology.kubernetes.io/zone (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions), which contains the name of the availability zone in which the OpenShift node runs. You can configure your Kafka cluster to be aware of the rack in which it runs, and enable additional features such as spreading partition replicas across different racks or consuming messages from the closest replicas. For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints . Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed. 25.1. Spreading partition replicas across racks When rack awareness is configured, AMQ Streams will set broker.rack configuration for each Kafka broker. The broker.rack configuration assigns a rack ID to each broker. When broker.rack is configured, Kafka brokers will spread partition replicas across as many different racks as possible. When replicas are spread across multiple racks, the probability that multiple replicas will fail at the same time is lower than if they would be in the same rack. Spreading replicas improves resiliency, and is important for availability and reliability. To enable rack awareness in Kafka, add the rack option to the .spec.kafka section of the Kafka custom resource as shown in the example below. Example rack configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone # ... Note The rack in which brokers are running can change in some cases when the pods are deleted or restarted. As a result, the replicas running in different racks might then share the same rack. Use Cruise Control and the KafkaRebalance resource with the RackAwareGoal to make sure that replicas remain distributed across different racks. When rack awareness is enabled in the Kafka custom resource, AMQ Streams will automatically add the OpenShift preferredDuringSchedulingIgnoredDuringExecution affinity rule to distribute the Kafka brokers across the different racks. However, the preferred rule does not guarantee that the brokers will be spread. Depending on your exact OpenShift and Kafka configurations, you should add additional affinity rules or configure topologySpreadConstraints for both ZooKeeper and Kafka to make sure the nodes are properly distributed across as many racks as possible. For more information see Configuring pod scheduling . 25.2. Consuming messages from the closest replicas Rack awareness can also be used in consumers to fetch data from the closest replica. This is useful for reducing the load on your network when a Kafka cluster spans multiple datacenters and can also reduce costs when running Kafka in public clouds. However, it can lead to increased latency. In order to be able to consume from the closest replica, rack awareness has to be configured in the Kafka cluster, and the RackAwareReplicaSelector has to be enabled.
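For illustration only, and not part of the schema reference itself, a consumer can then pass a client.rack value that matches one of the broker rack IDs so that fetches are served from a replica in the same zone; the bootstrap address, topic, group, and zone name below are assumptions:
# Console consumer fetching from the closest replica in zone us-east-1a
bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic --group my-group \
  --consumer-property client.rack=us-east-1a
The broker-side configuration that makes this possible is shown next.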
The replica selector plugin provides the logic that enables clients to consume from the nearest replica. The default implementation uses LeaderSelector to always select the leader replica for the client. Specify RackAwareReplicaSelector for the replica.selector.class to switch from the default implementation. Example rack configuration with enabled replica-aware selector apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone config: # ... replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector # ... In addition to the Kafka broker configuration, you also need to specify the client.rack option in your consumers. The client.rack option should specify the rack ID in which the consumer is running. RackAwareReplicaSelector associates matching broker.rack and client.rack IDs, to find the nearest replica and consume from it. If there are multiple replicas in the same rack, RackAwareReplicaSelector always selects the most up-to-date replica. If the rack ID is not specified, or if it cannot find a replica with the same rack ID, it will fall back to the leader replica. Figure 25.1. Example showing client consuming from replicas in the same availability zone You can also configure Kafka Connect, MirrorMaker 2 and Kafka Bridge so that connectors consume messages from the closest replicas. You enable rack awareness in the KafkaConnect , KafkaMirrorMaker2 , and KafkaBridge custom resources. The configuration does not set affinity rules, but you can also configure affinity or topologySpreadConstraints . For more information see Configuring pod scheduling . When deploying Kafka Connect using AMQ Streams, you can use the rack section in the KafkaConnect custom resource to automatically configure the client.rack option. Example rack configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... When deploying MirrorMaker 2 using AMQ Streams, you can use the rack section in the KafkaMirrorMaker2 custom resource to automatically configure the client.rack option. Example rack configuration for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... When deploying Kafka Bridge using AMQ Streams, you can use the rack section in the KafkaBridge custom resource to automatically configure the client.rack option. Example rack configuration for Kafka Bridge apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... 25.3. Rack schema properties Property Description topologyKey A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set a broker's broker.rack config, and the client.rack config for Kafka Connect or MirrorMaker 2. string | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone config: # replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 spec: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # rack: topologyKey: topology.kubernetes.io/zone #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-rack-reference |
Chapter 50. EntityOperatorTemplate schema reference | Chapter 50. EntityOperatorTemplate schema reference Used in: EntityOperatorSpec Property Description deployment Template for Entity Operator Deployment . DeploymentTemplate pod Template for Entity Operator Pods . PodTemplate topicOperatorContainer Template for the Entity Topic Operator container. ContainerTemplate userOperatorContainer Template for the Entity User Operator container. ContainerTemplate tlsSidecarContainer Template for the Entity Operator TLS sidecar container. ContainerTemplate serviceAccount Template for the Entity Operator service account. ResourceTemplate entityOperatorRole Template for the Entity Operator Role. ResourceTemplate topicOperatorRoleBinding Template for the Entity Topic Operator RoleBinding. ResourceTemplate userOperatorRoleBinding Template for the Entity User Operator RoleBinding. ResourceTemplate | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-EntityOperatorTemplate-reference
Chapter 18. Orchestration (heat) Parameters | Chapter 18. Orchestration (heat) Parameters You can modify the heat service with orchestration parameters. Parameter Description ApacheCertificateKeySize Override the private key size used when creating the certificate for this service. CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . EnableCache Enable caching with memcached. The default value is True . HeatApiOptEnvVars Hash of optional environment variables. HeatApiOptVolumes List of optional volumes to be mounted. HeatAuthEncryptionKey Auth encryption key for heat-engine. HeatConfigureDelegatedRoles Create delegated roles. The default value is False . HeatConvergenceEngine Enables the heat engine with the convergence architecture. The default value is True . HeatCorsAllowedOrigin Indicate whether this resource may be shared with the domain received in the request "origin" header. HeatCronPurgeDeletedAge Cron to purge database entries marked as deleted and older than USDage - Age. The default value is 30 . HeatCronPurgeDeletedAgeType Cron to purge database entries marked as deleted and older than USDage - Age type. The default value is days . HeatCronPurgeDeletedDestination Cron to purge database entries marked as deleted and older than USDage - Log destination. The default value is /dev/null . HeatCronPurgeDeletedEnsure Cron to purge database entries marked as deleted and older than USDage - Ensure. The default value is present . HeatCronPurgeDeletedHour Cron to purge database entries marked as deleted and older than USDage - Hour. The default value is 0 . HeatCronPurgeDeletedMaxDelay Cron to purge database entries marked as deleted and older than USDage - Max Delay. The default value is 3600 . HeatCronPurgeDeletedMinute Cron to purge database entries marked as deleted and older than USDage - Minute. The default value is 1 . HeatCronPurgeDeletedMonth Cron to purge database entries marked as deleted and older than USDage - Month. The default value is * . HeatCronPurgeDeletedMonthday Cron to purge database entries marked as deleted and older than USDage - Month Day. The default value is * . HeatCronPurgeDeletedUser Cron to purge database entries marked as deleted and older than USDage - User. The default value is heat . HeatCronPurgeDeletedWeekday Cron to purge database entries marked as deleted and older than USDage - Week Day. The default value is * . HeatEnableDBPurge Whether to create cron job for purging soft deleted rows in the OpenStack Orchestration (heat) database. The default value is True . HeatEngineOptEnvVars Hash of optional environment variables. HeatEngineOptVolumes List of optional volumes to be mounted. HeatEnginePluginDirs An array of directories to search for plug-ins. HeatMaxJsonBodySize Maximum raw byte size of the OpenStack Orchestration (heat) API JSON request body. The default value is 4194304 . HeatMaxNestedStackDepth Maximum number of nested stack depth. The default value is 6 . HeatMaxResourcesPerStack Maximum resources allowed per top-level stack. -1 stands for unlimited. The default value is 1000 . HeatPassword The password for the Orchestration service and database account. HeatReauthenticationAuthMethod Allow reauthentication on token expiry, such that long-running tasks may complete. Note this defeats the expiry of any provided user tokens. HeatStackDomainAdminPassword The admin password for the OpenStack Orchestration (heat) domain in OpenStack Identity (keystone). 
HeatWorkers Number of workers for OpenStack Orchestration (heat) service. Note that more workers create a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets it to the OpenStack internal default, which is equal to the number of CPU cores on the node. The default value is 0 . HeatYaqlLimitIterators The maximum number of elements in a collection that yaql expressions can take for their evaluation. The default value is 1000 . HeatYaqlMemoryQuota The maximum size of memory in bytes that yaql expressions can take for their evaluation. The default value is 100000 . MemcachedTLS Set to True to enable TLS on Memcached service. Because not all services support Memcached TLS, during the migration period, Memcached will listen on 2 ports - on the port set with MemcachedPort parameter (above) and on 11211, without TLS. The default value is False . MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is True . NotificationDriver Driver or drivers to handle sending notifications. The default value is noop . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/ref_orchestration-heat-parameters_overcloud_parameters
API Documentation | API Documentation Red Hat Single Sign-On 7.6 For Use with Red Hat Single Sign-On 7.6 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/api_documentation/index |
Chapter 6. Red Hat Quay Robot Account overview | Chapter 6. Red Hat Quay Robot Account overview Robot Accounts are used to set up automated access to the repositories in your Red Hat Quay registry. They are similar to OpenShift Container Platform service accounts. Setting up a Robot Account results in the following: Credentials are generated that are associated with the Robot Account. Repositories and images that the Robot Account can push and pull images from are identified. Generated credentials can be copied and pasted to use with different container clients, such as Docker, Podman, Kubernetes, Mesos, and so on, to access each defined repository. Each Robot Account is limited to a single user namespace or Organization. For example, the Robot Account could provide access to all repositories for the user quayadmin . However, it cannot provide access to repositories that are not in the user's list of repositories. Robot Accounts can be created using the Red Hat Quay UI, or through the CLI using the Red Hat Quay API. After creation, Red Hat Quay administrators can leverage more advanced features with Robot Accounts, such as keyless authentication. 6.1. Creating a robot account by using the UI Use the following procedure to create a robot account using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization that you will create the robot account for, for example, test-org . Click the Robot accounts tab Create robot account . In the Provide a name for your robot account box, enter a name, for example, robot1 . The name of your Robot Account becomes a combination of your username plus the name of the robot, for example, quayadmin+robot1 Optional. The following options are available if desired: Add the robot account to a team. Add the robot account to a repository. Adjust the robot account's permissions. On the Review and finish page, review the information you have provided, then click Review and finish . The following alert appears: Successfully created robot account with robot name: <organization_name> + <robot_name> . Alternatively, if you tried to create a robot account with the same name as another robot account, you might receive the following error message: Error creating robot account . Optional. You can click Expand or Collapse to reveal descriptive information about the robot account. Optional. You can change permissions of the robot account by clicking the kebab menu Set repository permissions . The following message appears: Successfully updated repository permission . Optional. You can click the name of your robot account to obtain the following information: Robot Account : Select this obtain the robot account token. You can regenerate the token by clicking Regenerate token now . Kubernetes Secret : Select this to download credentials in the form of a Kubernetes pull secret YAML file. Podman : Select this to copy a full podman login command line that includes the credentials. Docker Configuration : Select this to copy a full docker login command line that includes the credentials. 6.2. Creating a robot account by using the Red Hat Quay API Use the following procedure to create a robot account using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
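Because each API call below sends the same bearer token, it can be convenient to export it once in your shell before starting; the variable name and placeholder value here are illustrative only and are not part of the documented procedure:
# Store the OAuth access token for reuse in the curl commands that follow
export bearer_token=<oauth_access_token>
# Optional sanity check: list the robot accounts of the current user
curl -s -H "Authorization: Bearer $bearer_token" "https://<quay-server.example.com>/api/v1/user/robots"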
Procedure Enter the following command to create a new robot account for an organization using the PUT /api/v1/organization/{orgname}/robots/{robot_shortname} endpoint: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>" Example output {"name": "orgname+robot-name", "created": "Fri, 10 May 2024 15:11:00 -0000", "last_accessed": null, "description": "", "token": "<example_secret>", "unstructured_metadata": null} Enter the following command to create a new robot account for the current user with the PUT /api/v1/user/robots/{robot_shortname} endpoint: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/user/robots/<robot_name>" Example output {"name": "quayadmin+robot-name", "created": "Fri, 10 May 2024 15:24:57 -0000", "last_accessed": null, "description": "", "token": "<example_secret>", "unstructured_metadata": null} 6.3. Bulk managing robot account repository access Use the following procedure to manage, in bulk, robot account repository access by using the Red Hat Quay v2 UI. Prerequisites You have created a robot account. You have created multiple repositories under a single organization. Procedure On the Red Hat Quay v2 UI landing page, click Organizations in the navigation pane. On the Organizations page, select the name of the organization that has multiple repositories. The number of repositories under a single organization can be found under the Repo Count column. On your organization's page, click Robot accounts . For the robot account that will be added to multiple repositories, click the kebab icon Set repository permissions . On the Set repository permissions page, check the boxes of the repositories that the robot account will be added to. For example: Set the permissions for the robot account, for example, None , Read , Write , Admin . Click save . An alert that says Success alert: Successfully updated repository permission appears on the Set repository permissions page, confirming the changes. Return to the Organizations Robot accounts page. Now, the Repositories column of your robot account shows the number of repositories that the robot account has been added to. 6.4. Disabling robot accounts by using the UI Red Hat Quay administrators can manage robot accounts by disallowing users to create new robot accounts. Important Robot accounts are mandatory for repository mirroring. Setting the ROBOTS_DISALLOW configuration field to true breaks mirroring configurations. Users mirroring repositories should not set ROBOTS_DISALLOW to true in their config.yaml file. This is a known issue and will be fixed in a future release of Red Hat Quay. Use the following procedure to disable robot account creation. Prerequisites You have created multiple robot accounts. Procedure Update your config.yaml field to add the ROBOTS_DISALLOW variable, for example: ROBOTS_DISALLOW: true Restart your Red Hat Quay deployment. Verification: Creating a new robot account Navigate to your Red Hat Quay repository. Click the name of a repository. In the navigation pane, click Robot Accounts . Click Create Robot Account . Enter a name for the robot account, for example, <organization-name/username>+<robot-name> . Click Create robot account to confirm creation. The following message appears: Cannot create robot account. Robot accounts have been disabled. Please contact your administrator. 
Verification: Logging into a robot account On the command-line interface (CLI), attempt to log in as one of the robot accounts by entering the following command: USD podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" <quay-server.example.com> The following error message is returned: Error: logging into "<quay-server.example.com>": invalid username/password You can pass in the log-level=debug flag to confirm that robot accounts have been deactivated: USD podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" --log-level=debug <quay-server.example.com> ... DEBU[0000] error logging into "quay-server.example.com": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator. 6.5. Regenerating a robot account token by using the Red Hat Quay API Use the following procedure to regenerate a robot account token using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to regenerate a robot account token for an organization using the POST /api/v1/organization/{orgname}/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate" Example output {"name": "test-org+test", "created": "Fri, 10 May 2024 17:46:02 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} Enter the following command to regenerate a robot account token for the current user with the POST /api/v1/user/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate" Example output {"name": "quayadmin+test", "created": "Fri, 10 May 2024 14:12:11 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} 6.6. Deleting a robot account by using the UI Use the following procedure to delete a robot account using the Red Hat Quay UI. Procedure Log into your Red Hat Quay registry: Click the name of the Organization that has the robot account. Click Robot accounts . Check the box of the robot account to be deleted. Click the kebab menu. Click Delete . Type confirm into the textbox, then click Delete . 6.7. Deleting a robot account by using the Red Hat Quay API Use the following procedure to delete a robot account using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to delete a robot account for an organization using the DELETE /api/v1/organization/{orgname}/robots/{robot_shortname} endpoint: curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>" The CLI does not return information when deleting a robot account with the API. 
To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/organization/{orgname}/robots command to see if details are returned for the robot account: $ curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots" Example output {"robots": []} Enter the following command to delete a robot account for the current user with the DELETE /api/v1/user/robots/{robot_shortname} endpoint: $ curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" The CLI does not return information when deleting a robot account for the current user with the API. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/user/robots/{robot_shortname} command to see if details are returned for the robot account: $ curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" Example output {"message":"Could not find robot with specified username"} 6.8. Keyless authentication with robot accounts In previous versions of Red Hat Quay, robot account tokens were valid for the lifetime of the token unless deleted or regenerated. Tokens that do not expire have security implications for users who do not want to store long-term passwords or manage the deletion, or regeneration, of new authentication tokens. With Red Hat Quay 3.13, Red Hat Quay administrators are provided the ability to exchange external OIDC tokens for short-lived, or ephemeral, robot account tokens with either Red Hat Single Sign-On (based on the Keycloak project) or Microsoft Entra ID. This allows robot accounts to leverage tokens that last one hour, which are refreshed regularly and can be used to authenticate individual transactions. This feature greatly enhances the security of your Red Hat Quay registry by mitigating the possibility of robot token exposure by removing the tokens after one hour. Configuring keyless authentication with robot accounts is a multi-step procedure that requires setting a robot federation, generating an OAuth2 token from your OIDC provider, and exchanging the OAuth2 token for a robot account access token. 6.8.1. Generating an OAuth2 token with Red Hat Single Sign-On The following procedure shows you how to generate an OAuth2 token using Red Hat Single Sign-On. Depending on your OIDC provider, these steps will vary. Procedure On the Red Hat Single Sign-On UI: Click Clients and then the name of the application or service that can request authentication of a user. On the Settings page of your client, ensure that the following options are set or enabled: Client ID Valid redirect URI Client authentication Authorization Standard flow Direct access grants Note Settings can differ depending on your setup. On the Credentials page, store the Client Secret for future use. On the Users page, click Add user and enter a username, for example, service-account-quaydev . Then, click Create . Click the name of the user, for example service-account-quaydev , on the Users page. Click the Credentials tab Set password and provide a password for the user. If warranted, you can make this password temporary by selecting the Temporary option. Click the Realm settings tab OpenID Endpoint Configuration . Store the /protocol/openid-connect/token endpoint.
For example: http://localhost:8080/realms/master/protocol/openid-connect/token On a web browser, navigate to the following URL: http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id> When prompted, log in with the service-account-quaydev user and the temporary password you set. Complete the login by providing the required information and setting a permanent password if necessary. You are redirected to the URI address provided for your client. For example: https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43 Take note of the code provided in the address. For example: code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43 Note This is a temporary code that can only be used one time. If necessary, you can refresh the page or revisit the URL to obtain another code. On your terminal, use the following curl -X POST command to generate a temporary OAuth2 access token: USD curl -X POST "http://localhost:8080/realms/master/protocol/openid-connect/token" 1 -H "Content-Type: application/x-www-form-urlencoded" \ -d "client_id=quaydev" 2 -d "client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz" 3 -d "grant_type=authorization_code" -d "code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43" 4 1 The protocol/openid-connect/token endpoint found on the Realm settings page of the Red Hat Single Sign-On UI. 2 The Client ID used for this procedure. 3 The Client Secret for the Client ID. 4 The code returned from the redirect URI. Example output {"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...", "expires_in":60,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw","token_type":"Bearer","not-before-policy":0,"session_state":"5c9bce22-6b85-4654-b716-e9bbb3e755bc","scope":"profile email"} Store the access_token from the previously step, as it will be exchanged for a Red Hat Quay robot account token in the following procedure. 6.8.2. Setting up a robot account federation by using the Red Hat Quay v2 UI The following procedure shows you how to set up a robot account federation by using the Red Hat Quay v2 UI. This procedure uses Red Hat Single Sign-On, which is based on the Keycloak project. These steps, and the information used to set up a robot account federation, will vary depending on your OIDC provider. Prerequisites You have created an organization. The following example uses fed_test . You have created a robot account. The following example uses fest_test+robot1 . You have configured a OIDC for your Red Hat Quay deployment. The following example uses Red Hat Single Sign-On. Procedure On the Red Hat Single Sign-On main page: Select the appropriate realm that is authenticated for use with Red Hat Quay. Store the issuer URL, for example, https://keycloak-auth-realm.quayadmin.org/realms/quayrealm . Click Users the name of the user to be linked with the robot account for authentication. You must use the same user account that you used when generating the OAuth2 access token. 
On the Details page, store the ID of the user, for example, 449e14f8-9eb5-4d59-a63e-b7a77c75f770 . Note The information collected in this step will vary depending on your OIDC provider. For example, with Red Hat Single Sign-On, the ID of a user is used as the Subject to set up the robot account federation in a subsequent step. For a different OIDC provider, like Microsoft Entra ID, this information is stored as the Subject . On your Red Hat Quay registry: Navigate to Organizations and click the name of your organization, for example, fed_test . Click Robot Accounts . Click the menu kebab Set robot federation . Click the + symbol. In the popup window, include the following information: Issuer URL : https://keycloak-auth-realm.quayadmin.org/realms/quayrealm . For Red Hat Single Sign-On, this is the the URL of your Red Hat Single Sign-On realm. This might vary depending on your OIDC provider. Subject : 449e14f8-9eb5-4d59-a63e-b7a77c75f770 . For Red Hat Single Sign-On, the Subject is the ID of your Red Hat Single Sign-On user. This varies depending on your OIDC provider. For example, if you are using Microsoft Entra ID, the Subject will be the Subject or your Entra ID user. Click Save . 6.8.3. Exchanging an OAuth2 access token for a Red Hat Quay robot account token The following procedure leverages the access token generated in the procedure to create a new Red Hat Quay robot account token. The new Red Hat Quay robot account token is used for authentication between your OIDC provider and Red Hat Quay. Note The following example uses a Python script to exchange the OAuth2 access token for a Red Hat Quay robot account token. Prerequisites You have the python3 CLI tool installed. Procedure Save the following Python script in a .py file, for example, robot_fed_token_auth.py import requests import os TOKEN=os.environ.get('TOKEN') robot_user = "fed-test+robot1" def get_quay_robot_token(fed_token): URL = "https://<quay-server.example.com>/oauth2/federation/robot/token" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == "__main__": get_quay_robot_token(TOKEN) 1 If your Red Hat Quay deployment is using custom SSL/TLS certificates, the response must be response = requests.get(URL,auth=(robot_user,fed_token),verify=False) , which includes the verify=False flag. Export the OAuth2 access token as TOKEN . For example: USD export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0... Run the robot_fed_token_auth.py script by entering the following command: USD python3 robot_fed_token_auth.py Example output <Response [200]> {"token": "291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ..."} Important This token expires after one hour. After one hour, a new token must be generated. Export the robot account access token as QUAY_TOKEN . For example: USD export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ 6.8.4. Pushing and pulling images After you have generated a new robot account access token and exported it, you can log in and the robot account using the access token and push and pull images. Prerequisites You have exported the OAuth2 access token into a new robot account access token. 
Procedure Log in to your Red Hat Quay registry using the fed_test+robot1 robot account and the QUAY_TOKEN access token. For example: $ podman login <quay-server.example.com> -u fed_test+robot1 -p $QUAY_TOKEN Pull an image from a Red Hat Quay repository for which the robot account has the proper permissions. For example: $ podman pull <quay-server.example.com/<repository_name>/<image_name>> Example output Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps Attempt to pull an image from a Red Hat Quay repository for which the robot account does not have the proper permissions. For example: $ podman pull <quay-server.example.com/<different_repository_name>/<image_name>> Example output Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized After one hour, the credentials for this robot account are set to expire. Afterwards, you must generate a new access token for this robot account. | [
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"ROBOTS_DISALLOW: true",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>",
"Error: logging into \"<quay-server.example.com>\": invalid username/password",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>",
"DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator.",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"{\"robots\": []}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"{\"message\":\"Could not find robot with specified username\"}",
"http://localhost:8080/realms/master/protocol/openid-connect/token",
"http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id>",
"https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43",
"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43",
"curl -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" 1 -H \"Content-Type: application/x-www-form-urlencoded\" -d \"client_id=quaydev\" 2 -d \"client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz\" 3 -d \"grant_type=authorization_code\" -d \"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43\" 4",
"{\"access_token\":\"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...\", \"expires_in\":60,\"refresh_expires_in\":1800,\"refresh_token\":\"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"5c9bce22-6b85-4654-b716-e9bbb3e755bc\",\"scope\":\"profile email\"}",
"import requests import os TOKEN=os.environ.get('TOKEN') robot_user = \"fed-test+robot1\" def get_quay_robot_token(fed_token): URL = \"https://<quay-server.example.com>/oauth2/federation/robot/token\" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == \"__main__\": get_quay_robot_token(TOKEN)",
"export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0",
"python3 robot_fed_token_auth.py",
"<Response [200]> {\"token\": \"291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ...\"}",
"export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ",
"podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN",
"podman pull <quay-server.example.com/<repository_name>/<image_name>>",
"Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps",
"podman pull <quay-server.example.com/<different_repository_name>/<image_name>>",
"Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/allow-robot-access-user-repo |
Appendix B. Contact information | Appendix B. Contact information Red Hat Decision Manager documentation team: [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/author-group |
Chapter 14. Managing custom file type content | Chapter 14. Managing custom file type content In Satellite, you might require methods of managing and distributing SSH keys and source code files or larger files such as virtual machine images and ISO files. To achieve this, custom products in Red Hat Satellite include repositories for custom file types. This provides a generic method to incorporate arbitrary files in a product. You can upload files to the repository and synchronize files from an upstream Satellite Server. When you add files to a custom file type repository, you can use the normal Satellite management functions such as adding a specific version to a content view to provide version control and making the repository of files available on various Capsule Servers. You must download the files on clients over HTTP or HTTPS using curl -O . You can create a file type repository in Satellite Server only in a custom product, but there is flexibility in how you create the repository source. You can create an independent repository source in a directory on Satellite Server, or on a remote HTTP server, and then synchronize the contents of that directory into Satellite. This method is useful when you have multiple files to add to a Satellite repository. 14.1. Creating a local source for a custom file type repository You can create a custom file type repository source, from a directory of files, on the base system where Satellite is installed using Pulp Manifest . You can then synchronize the files into a repository and manage the custom file type content like any other content type. Use this procedure to configure a repository in a directory on the base system where Satellite is installed. To create a file type repository in a directory on a remote server, see Section 14.2, "Creating a remote source for a custom file type repository" . Procedure Ensure the Utils repository is enabled. Enable the satellite-utils module: Install the Pulp Manifest package: Note that this command stops the Satellite service and re-runs satellite-installer . Alternatively, to prevent downtime caused by stopping the service, you can use the following: Create a directory that you want to use as the file type repository, such as: Add the parent folder into allowed import paths: Add files to the directory or create a test file: Run the Pulp Manifest command to create the manifest: Verify the manifest was created: Now, you can import your local source as a custom file type repository. Use the file:// URL scheme and the name of the directory to specify an Upstream URL , such as file:///var/lib/pulp/ local_repos / my_file_repo . For more information, see Section 14.3, "Creating a custom file type repository" . If you update the contents of your directory, re-run Pulp Manifest and sync the repository in Satellite. For more information, see Section 4.7, "Synchronizing repositories" . 14.2. Creating a remote source for a custom file type repository You can create a custom file type repository source from a directory of files that is external to Satellite Server using Pulp Manifest . You can then synchronize the files into a repository on Satellite Server over HTTP or HTTPS and manage the custom file type content like any other content type. Use this procedure to configure a repository in a directory on a remote server. To create a file type repository in a directory on the base system where Satellite Server is installed, see Section 14.1, "Creating a local source for a custom file type repository" . 
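As an orientation for the procedure that follows, the remote-source flow on the web server reduces to creating a public directory, adding files, and generating a PULP_MANIFEST file. This is a sketch only; the directory name, test file name, and web server document root are assumptions:
# Create the repository directory under the HTTP server's public folder
mkdir /var/www/html/pub/my_file_repo
# Add content, then generate the manifest that Satellite synchronizes
touch /var/www/html/pub/my_file_repo/test.txt
pulp-manifest /var/www/html/pub/my_file_repo
# Confirm the manifest exists alongside the files
ls /var/www/html/pub/my_file_repo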
Prerequisites You have a server running Red Hat Enterprise Linux 8 registered to your Satellite or the Red Hat CDN. Your server has an entitlement to the Red Hat Enterprise Linux Server and Red Hat Satellite Utils repositories. You have installed an HTTP server. For more information about configuring a web server, see Setting up the Apache HTTP web server in Red Hat Enterprise Linux 8 Deploying different types of servers . Procedure On your server, enable the required repositories: Enable the satellite-utils module: Install the Pulp Manifest package: Create a directory that you want to use as the file type repository in the HTTP server's public folder: Add files to the directory or create a test file: Run the Pulp Manifest command to create the manifest: Verify the manifest was created: Now, you can import your remote source as a custom file type repository. Use the path to the directory to specify an Upstream URL , such as http://server.example.com/ my_file_repo . For more information, see Section 14.3, "Creating a custom file type repository" . If you update the contents of your directory, re-run Pulp Manifest and sync the repository in Satellite. For more information, see Section 4.7, "Synchronizing repositories" . 14.3. Creating a custom file type repository The procedure for creating a custom file type repository is the same as the procedure for creating any custom content, except that when you create the repository, you select the file type. You must create a product and then add a custom repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select a product that you want to create a repository for. On the Repositories tab, click New Repository . In the Name field, enter a name for the repository. Satellite automatically completes the Label field based on the name. Optional: In the Description field, enter a description for the repository. From the Type list, select file as type of repository. Optional: In the Upstream URL field, enter the URL of the upstream repository to use as a source. If you do not enter an upstream URL, you can manually upload packages. For more information, see Section 14.4, "Uploading files to a custom file type repository" . Select Verify SSL to verify that the SSL certificates of the repository are signed by a trusted CA. Optional: In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication. Optional: In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication. From the Mirroring Policy list, select the type of content synchronization Satellite Server performs. For more information, see Section 4.12, "Mirroring policies overview" . Optional: In the HTTP Proxy Policy field, select an HTTP proxy. By default, it uses the Global Default HTTP proxy. Optional: You can clear the Unprotected checkbox to require a subscription entitlement certificate for accessing this repository. By default, the repository is published through HTTP. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository. 
Optional: In the SSL Client Cert field, select the SSL Client Certificate for the repository. Optional: In the SSL Client Key field, select the SSL Client Key for the repository. Click Save to create the repository. CLI procedure Create a custom product: Table 14.1. Optional parameters for the hammer product create command Option Description --gpg-key-id gpg_key_id GPG key numeric identifier --sync-plan-id sync_plan_id Sync plan numeric identifier --sync-plan sync_plan_name Sync plan name to search by Create a file type repository: Table 14.2. Optional parameters for the hammer repository create command Option Description --checksum-type sha_version Repository checksum (either sha1 or sha256 ) --download-policy policy_name Download policy for repositories (either immediate or on_demand ) --gpg-key-id gpg_key_id GPG key numeric identifier --gpg-key gpg_key_name Key name to search by --mirror-on-sync boolean Must this repo be mirrored from the source, and stale packages removed, when synced? Set to true or false , yes or no , 1 or 0 . --publish-via-http boolean Must this also be published using HTTP? Set to true or false , yes or no , 1 or 0 . --upstream-password repository_password Password for the upstream repository user --upstream-username repository_username Upstream repository user, if required for authentication --url My_Repository_URL URL of the remote repository --verify-ssl-on-sync boolean Verify that the upstream SSL certificates of the remote repository are signed by a trusted CA? Set to true or false , yes or no , 1 or 0 . 14.4. Uploading files to a custom file type repository Use this procedure to upload files to a custom file type repository. Procedure In the Satellite web UI, navigate to Content > Products . Select a custom product by name. Select a file type repository by name. Click Browse to search and select the file you want to upload. Click Upload to upload the selected file to Satellite Server. Visit the URL where the repository is published to see the file. CLI procedure The --path option can indicate a file, a directory of files, or a glob expression of files. Globs must be escaped by single or double quotes. 14.5. Downloading files to a host from a custom file type repository You can download files to a client over HTTPS using curl -O , and optionally over HTTP if the Unprotected option for repositories is selected. Prerequisites You have a custom file type repository. For more information, see Section 14.3, "Creating a custom file type repository" . You know the name of the file you want to download to clients from the file type repository. To use HTTPS you require the following certificates on the client: The katello-server-ca.crt . For more information, see Importing the Katello Root CA Certificate in Administering Red Hat Satellite . An Organization Debug Certificate. For more information, see Creating an Organization Debug Certificate in Administering Red Hat Satellite . Procedure In the Satellite web UI, navigate to Content > Products . Select a custom product by name. Select a file type repository by name. Ensure to select the Unprotected checkbox to access the repository published through HTTP. Copy the Published At URL. On your client, download the file from Satellite Server: For HTTPS: For HTTP: CLI procedure List the file type repositories. Display the repository information. 
If Unprotected is enabled, the output is similar to this: If Unprotected is not enabled, the output is similar to this: On your client, download the file from Satellite Server: For HTTPS: For HTTP: | [
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils",
"satellite-maintain packages install python3.11-pulp_manifest",
"satellite-maintain packages unlock satellite-maintain packages install python39-pulp_manifest satellite-maintain packages lock",
"mkdir -p /var/lib/pulp/ local_repos / my_file_repo",
"satellite-installer --foreman-proxy-content-pulpcore-additional-import-paths /var/lib/pulp/ local_repos",
"touch /var/lib/pulp/ local_repos / my_file_repo / test.txt",
"pulp-manifest /var/lib/pulp/ local_repos / my_file_repo",
"ls /var/lib/pulp/ local_repos / my_file_repo PULP_MANIFEST test.txt",
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils",
"dnf install python3.11-pulp_manifest",
"mkdir /var/www/html/pub/ my_file_repo",
"touch /var/www/html/pub/ my_file_repo / test.txt",
"pulp-manifest /var/www/html/pub/ my_file_repo",
"ls /var/www/html/pub/ my_file_repo PULP_MANIFEST test.txt",
"hammer product create --description \" My_Files \" --name \" My_File_Product \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"",
"hammer repository create --content-type \"file\" --name \" My_Files \" --organization \" My_Organization \" --product \" My_File_Product \"",
"hammer repository upload-content --id repo_ID --organization \" My_Organization \" --path example_file",
"curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"hammer repository list --content-type file ---|------------|-------------------|--------------|---- ID | NAME | PRODUCT | CONTENT TYPE | URL ---|------------|-------------------|--------------|---- 7 | My_Files | My_File_Product | file | ---|------------|-------------------|--------------|----",
"hammer repository info --name \" My_Files \" --organization-id My_Organization_ID --product \" My_File_Product \"",
"Publish Via HTTP: yes Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /",
"Publish Via HTTP: no Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /",
"curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/managing_custom_file_type_content_content-management |
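As a practical illustration of the workflow in the preceding chapter, the following minimal shell sketch ties the pieces together. It is not part of the official procedure: the satellite.example.com hostname, the /var/lib/pulp/local_repos/my_file_repo directory, the test.txt file, the organization and product labels, and the repository ID 7 are the example values used above (or assumptions where the chapter leaves them open), so substitute the values and certificate file names from your own environment before running it.

# Sketch only: example values, not a definitive implementation.
# Refresh the local source after adding files (Section 14.1),
# so the next repository sync picks them up.
pulp-manifest /var/lib/pulp/local_repos/my_file_repo

# Alternatively, upload a single file directly to the repository (Section 14.4).
# The repository ID comes from: hammer repository list --content-type file
hammer repository upload-content --id 7 --organization "My_Organization" --path ./test.txt

# Download the file from the published location over HTTPS (Section 14.5),
# using the Katello root CA and an organization debug certificate.
curl --cacert ./katello-server-ca.crt \
     --cert ./My_Organization_key-cert.pem \
     --remote-name \
     https://satellite.example.com/pulp/content/My_Organization_Label/Library/custom/My_File_Product_Label/My_Files_Label/test.txt

Note that pulp-manifest only regenerates the PULP_MANIFEST file; the repository contents on Satellite Server change only after the repository is synchronized again.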
9.2.2. RPM as an IDS | 9.2.2. RPM as an IDS The RPM Package Manager (RPM) is another program that can be used as a host-based IDS. RPM contains various options for querying packages and their contents. These verification options can be invaluable to an administrator who suspects that critical system files and executables have been modified. The following list details some RPM options that can verify file integrity on a Red Hat Enterprise Linux system. Refer to the System Administrators Guide for complete information about using RPM. Important Some of the commands in the following list require the importation of the Red Hat GPG public key into the system's RPM keyring. This key verifies that packages installed on the system contain a Red Hat package signature, which ensures that the packages originated from Red Hat. The key can be imported by issuing the following command as root (substituting <version> with the version of RPM installed on the system): rpm -V package_name The -V option verifies the files in the installed package called package_name . If it shows no output and exits, this means that none of the files have been modified in any way since the last time the RPM database was updated. If there is an error, such as the following, then the file has been modified in some way and you must assess whether to keep the file (such as with modified configuration files in the /etc/ directory) or delete the file and reinstall the package that contains it. The following list defines the elements of the 8-character string ( S.5....T in the above example) that indicates a verification failure. . - The test has passed this phase of verification ? - The test has found a file that could not be read, which is most likely a file permission issue S - The test has encountered a file that is smaller or larger than it was when originally installed on the system 5 - The test has found a file whose md5 checksum does not match the original checksum of the file when first installed M - The test has detected a file permission or file type error on the file D - The test has encountered a device file mismatch in major/minor number L - The test has found a symbolic link that has been changed to another file path U - The test has found a file that had its user ownership changed G - The test has found a file that had its group ownership changed T - The test has encountered mtime verification errors on the file rpm -Va The -Va option verifies all installed packages and finds any failure in its verification tests (much like the -V option, but more verbose in its output since it is verifying every installed package). rpm -Vf /bin/ls The -Vf option verifies individual files in an installed package. This can be useful when performing a quick verification of a suspect file. rpm -K application-1.0.i386.rpm The -K option is useful for checking the md5 checksum and the GPG signature of an RPM package file. This is useful for checking whether a package about to be installed is signed by Red Hat or any organization for which you have the GPG public key imported into a GPG keyring. A package that has not been properly signed triggers an error message similar to the following: Exercise caution when installing packages that are unsigned as they are not approved by Red Hat, Inc. and could contain malicious code. RPM can be a powerful tool, as evidenced by its many verification tools for installed packages and RPM package files. 
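To see how these options combine in practice, the following short sketch (not part of the original guide) strings them together. The coreutils package name and the application-1.0.i386.rpm file are placeholders, and <version> must be replaced with the installed RPM version.

# Sketch only: package names and paths are illustrative.
# Import the Red Hat GPG public key so signature checks can succeed.
rpm --import /usr/share/doc/rpm-<version>/RPM-GPG-KEY

# Verify one suspect file and one installed package.
rpm -Vf /bin/ls
rpm -V coreutils

# Verify every installed package, ignoring files under /etc/, and keep only
# lines that report a changed size (S) or md5 checksum (5).
rpm -Va | grep -v '/etc/' | grep -E '^(S|..5)'

# Check the signature and md5 checksum of a package file before installing it.
rpm -K application-1.0.i386.rpm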
It is strongly recommended that the contents of the RPM database directory ( /var/lib/rpm/ ) be backed up to read-only media, such as CD-ROM, after installation of Red Hat Enterprise Linux. Doing so allows verification of files and packages against the read-only database, rather than against the database on the system, as malicious users may corrupt the database and skew the results. | [
"rpm --import /usr/share/doc/rpm- <version> /RPM-GPG-KEY",
"S.5....T c /bin/ps",
"application-1.0.i386.rpm (SHA1) DSA sha1 md5 (GPG) NOT OK (MISSING KEYS: GPG#897da07a)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-ids-host-rpm |
Introduction | Introduction This document provides a high-level overview of Red Hat Cluster Suite for Red Hat Enterprise Linux 4. Although the information in this document is an overview, you should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of server computing to gain a good comprehension of the information. For more information about using Red Hat Enterprise Linux, refer to the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation. Red Hat Enterprise Linux Introduction to System Administration - Provides introductory information for new Red Hat Enterprise Linux system administrators. Red Hat Enterprise Linux System Administration Guide - Provides more detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user. Red Hat Enterprise Linux Reference Guide - Provides detailed information suited for more experienced users to reference when needed, as opposed to step-by-step instructions. Red Hat Enterprise Linux Security Guide - Details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. This document contains overview information about Red Hat Cluster Suite for Red Hat Enterprise Linux 4 and is part of a documentation set that provides conceptual, procedural, and reference information about Red Hat Cluster Suite for Red Hat Enterprise Linux 4. Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/ . For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 4, refer to the following resources: Configuring and Managing a Red Hat Cluster - Provides information about installing, configuring and managing Red Hat Cluster components. LVM Administrator's Guide: Configuration and Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Global File System: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). Using Device-Mapper Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 4. Using GNBD with Global File System - Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS. Linux Virtual Server Administration - Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS). Red Hat Cluster Suite Release Notes - Provides information about the current release of Red Hat Cluster Suite. Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/ . 1. Feedback If you spot a typo, or if you have thought of a way to make this document better, we would love to hear from you. Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rh-cs-en . Be sure to mention the document's identifier: By mentioning this document's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. 
If you have found an error, please include the section number and some of the surrounding text so we can find it easily. | [
"Cluster_Suite_Overview(EN)-4.8 (2009-04-24:T15:25)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/ch-intro-cso |