Chapter 3. Package Namespace Change for JBoss EAP 8.0
Chapter 3. Package Namespace Change for JBoss EAP 8.0 This section provides additional information about the package namespace changes in JBoss EAP 8.0. JBoss EAP 8.0 provides full support for Jakarta EE 10 and many other implementations of the Jakarta EE 10 APIs. An important change that Jakarta EE 10 brings to JBoss EAP 8.0 is the package namespace change. 3.1. javax to jakarta Namespace Change A key difference between Jakarta EE 8 and EE 10 is the renaming of the EE API Java packages from javax.* to jakarta.* . This follows the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE. Adapting to this namespace change is the biggest task of migrating an application from JBoss EAP 7 to JBoss EAP 8. To migrate applications to Jakarta EE 10, you must complete the following steps: Update any import statements or other source code uses of EE API classes from the javax package to the jakarta package. Update the names of any EE-specified system properties or other configuration properties that begin with javax to begin with jakarta . For any application-provided implementations of EE interfaces or abstract classes that are bootstrapped using the java.util.ServiceLoader mechanism, change the name of the resource that identifies the implementation class from META-INF/services/javax.[rest_of_name] to META-INF/services/jakarta.[rest_of_name] . Note The Red Hat Migration Toolkit can assist in updating the namespaces in the application source code. For more information, see How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace . In cases where source code migration is not an option, the open source Eclipse Transformer project provides bytecode transformation tooling to transform existing Java archives from the javax namespace to the jakarta namespace. Note This change does not affect javax packages that are part of Java SE. Additional resources For more information, see The javax to jakarta Package Namespace Change .
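As an illustration of the source-level change described above, the following is a minimal sketch of a servlet after migration to the jakarta namespace; the class name and URL pattern are hypothetical examples, not taken from the product documentation, and the old javax imports are shown as comments for comparison.

// Before migration (javax namespace, Jakarta EE 8 and earlier):
// import javax.servlet.annotation.WebServlet;
// import javax.servlet.http.HttpServlet;
//
// After migration (jakarta namespace, Jakarta EE 10 on JBoss EAP 8.0):
import java.io.IOException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The business logic is unchanged; only the EE package prefix moves from javax to jakarta.
        resp.getWriter().println("Hello from Jakarta EE 10");
    }
}

Following the same pattern, a ServiceLoader resource file named, for example, META-INF/services/javax.json.spi.JsonProvider would be renamed to META-INF/services/jakarta.json.spi.JsonProvider.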
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/introduction_to_red_hat_jboss_enterprise_application_platform/package-namespace-change-for-jboss-eap-8-0_assembly-intro-eap
Chapter 23. Crypto (JCE)
Chapter 23. Crypto (JCE) Since Camel 2.3 Only producer is supported With Camel cryptographic endpoints and the Java Cryptographic Extension, it is easy to create Digital Signatures for Exchanges. Camel provides a pair of flexible endpoints which are used in concert to create a signature for an exchange in one part of the exchange workflow and then verify the signature in a later part of the workflow. 23.1. Dependencies When using crypto with Red Hat build of Camel Spring Boot, ensure that you use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-crypto-starter</artifactId> </dependency> 23.2. Introduction Digital signatures make use of asymmetric cryptographic techniques to sign messages. From a high level, the algorithms use pairs of complementary keys with the special property that data encrypted with one key can only be decrypted with the other. The private key is closely guarded and used to 'sign' the message, while the other, the public key, is shared with anyone interested in verifying the signed messages. Messages are signed by using the private key to encrypt a digest of the message. This encrypted digest is transmitted along with the message. On the other side, the verifier recalculates the message digest and uses the public key to decrypt the digest in the signature. If both digests match, the verifier knows only the holder of the private key could have created the signature. Camel uses the Signature service from the Java Cryptographic Extension to do all the heavy cryptographic lifting required to create exchange signatures. The following resources explain the mechanics of cryptography, message digests, and digital signatures, and how to leverage them with the JCE: Bruce Schneier's Applied Cryptography Beginning Cryptography with Java by David Hook The ever insightful Wikipedia Digital_signatures 23.3. URI format Camel provides a pair of crypto endpoints to create and verify signatures: crypto:sign creates the signature and stores it in the Header keyed by the constant org.apache.camel.component.crypto.DigitalSignatureConstants.SIGNATURE , that is, "CamelDigitalSignature" . crypto:verify reads the contents of this header and does the verification calculation. To function, the sign and verify process needs a pair of keys to be shared, with signing requiring a PrivateKey and verifying a PublicKey (or a Certificate containing one). Using the JCE, it is very simple to generate these key pairs, but it is usually most secure to use a KeyStore to house and share your keys. The DSL is very flexible about how keys are supplied and provides a number of mechanisms. A crypto:sign endpoint is typically defined in one route and the complementary crypto:verify in another, though for simplicity in the examples they appear one after the other. Both signing and verifying should be configured identically. 23.4. Configuring Options Camel components are configured on two separate levels: component level endpoint level 23.4.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many.
Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 23.4.2. Configuring Endpoint Options Endpoints have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. Use Property Placeholders to configure options so that you do not hardcode URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 23.5. Component Options The Crypto (JCE) component supports 21 options that are listed below. Name Description Default Type algorithm (producer) Sets the JCE name of the Algorithm that should be used for the signer. SHA256withRSA String alias (producer) Sets the alias used to query the KeyStore for keys and Certificates to be used in signing and verifying exchanges. This value can be provided at runtime via the message header org.apache.camel.component.crypto.DigitalSignatureConstants#KEYSTORE_ALIAS. String certificateName (producer) Sets the reference name for a Certificate that can be found in the registry. String keystore (producer) Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. KeyStore keystoreName (producer) Sets the reference name for a Keystore that can be found in the registry. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. false boolean privateKey (producer) Set the PrivateKey that should be used to sign the exchange. PrivateKey privateKeyName (producer) Sets the reference name for a PrivateKey that can be found in the registry. String provider (producer) Set the id of the security provider that provides the configured Signature algorithm. String publicKeyName (producer) Sets the reference name for a PublicKey that can be found in the registry. String secureRandomName (producer) Sets the reference name for a SecureRandom that can be found in the registry.
String signatureHeaderName (producer) Set the name of the message header that should be used to store the base64 encoded signature. This defaults to 'CamelDigitalSignature'. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean bufferSize (advanced) Set the size of the buffer used to read in the Exchange payload data. 2048 Integer certificate (advanced) Set the Certificate that should be used to verify the signature in the exchange based on its payload. Certificate clearHeaders (advanced) Determines if the Signature specific headers should be cleared after signing and verification. Defaults to true, and should only be made otherwise at your extreme peril as vital private information such as Keys and passwords may escape if unset. true boolean configuration (advanced) To use the shared DigitalSignatureConfiguration as configuration. DigitalSignatureConfiguration keyStoreParameters (advanced) Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges based on the given KeyStoreParameters. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. KeyStoreParameters publicKey (advanced) Set the PublicKey that should be used to verify the signature in the exchange. PublicKey secureRandom (advanced) Set the SecureRandom used to initialize the Signature service. SecureRandom password (security) Sets the password used to access an aliased PrivateKey in the KeyStore. String 23.6. Endpoint Options The Crypto (JCE) endpoint is configured using URI syntax: The following are the path and query parameters: 23.6.1. Path Parameters (2 parameters) Name Description Default Type cryptoOperation (producer) Required Set the Crypto operation from that supplied after the crypto scheme in the endpoint URI, for example, crypto:sign sets sign as the operation. Enum values: * sign * verify CryptoOperation name (producer) Required The logical name of this operation. String 23.6.2. Query Parameters (19 parameters) Name Description Default Type algorithm (producer) Sets the JCE name of the Algorithm that should be used for the signer. SHA256withRSA String alias (producer) Sets the alias used to query the KeyStore for keys and Certificates to be used in signing and verifying exchanges. This value can be provided at runtime via the message header org.apache.camel.component.crypto.DigitalSignatureConstants#KEYSTORE_ALIAS. String certificateName (producer) Sets the reference name for a Certificate that can be found in the registry. String keystore (producer) Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used.
KeyStore keystoreName (producer) Sets the reference name for a Keystore that can be found in the registry. String privateKey (producer) Set the PrivateKey that should be used to sign the exchange. PrivateKey privateKeyName (producer) Sets the reference name for a PrivateKey that can be found in the registry. String provider (producer) Set the id of the security provider that provides the configured Signature algorithm. String publicKeyName (producer) Sets the reference name for a PublicKey that can be found in the registry. String secureRandomName (producer) Sets the reference name for a SecureRandom that can be found in the registry. String signatureHeaderName (producer) Set the name of the message header that should be used to store the base64 encoded signature. This defaults to 'CamelDigitalSignature'. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages using Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. false boolean bufferSize (advanced) Set the size of the buffer used to read in the Exchange payload data. 2048 Integer certificate (advanced) Set the Certificate that should be used to verify the signature in the exchange based on its payload. Certificate clearHeaders (advanced) Determines if the Signature specific headers should be cleared after signing and verification. Defaults to true, and should only be made otherwise at your extreme peril as vital private information such as Keys and passwords may escape if unset. true boolean keyStoreParameters (advanced) Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges based on the given KeyStoreParameters. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. KeyStoreParameters publicKey (advanced) Set the PublicKey that should be used to verify the signature in the exchange. PublicKey secureRandom (advanced) Set the SecureRandom used to initialize the Signature service. SecureRandom password (security) Sets the password used to access an aliased PrivateKey in the KeyStore. String 23.7. Message Headers The Crypto (JCE) component supports 4 message headers that are listed below. Name Description Default Type CamelSignaturePrivateKey (producer) Constant: SIGNATURE_PRIVATE_KEY The PrivateKey that should be used to sign the message. PrivateKey CamelSignaturePublicKeyOrCert (producer) Constant: SIGNATURE_PUBLIC_KEY_OR_CERT The Certificate or PublicKey that should be used to verify the signature. Certificate or PublicKey CamelSignatureKeyStoreAlias (producer) Constant: KEYSTORE_ALIAS The alias used to query the KeyStore for keys and Certificates to be used in signing and verifying exchanges. String CamelSignatureKeyStorePassword (producer) Constant: KEYSTORE_PASSWORD The password used to access an aliased PrivateKey in the KeyStore. char[] 23.8. Using 23.8.1.
Raw keys The most basic way to sign and verify an exchange is with a KeyPair as follows. KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair(); from("direct:sign") .setHeader(DigitalSignatureConstants.SIGNATURE_PRIVATE_KEY, constant(keys.getPrivate())) .to("crypto:sign:message") .to("direct:verify"); from("direct:verify") .setHeader(DigitalSignatureConstants.SIGNATURE_PUBLIC_KEY_OR_CERT, constant(keys.getPublic())) .to("crypto:verify:check"); The same can be achieved with the Spring XML Extensions using references to keys. 23.8.2. KeyStores and Aliases The JCE provides a very versatile keystore concept for housing pairs of private keys and certificates, keeping them encrypted and password protected. They can be retrieved by applying an alias to the retrieval APIs. There are a number of ways to get keys and Certificates into a keystore; most often this is done with the external 'keytool' application. The following command will create a keystore containing a key and certificate aliased by bob , which can be used in the following examples. The password for the keystore and the key is letmein . keytool -genkey -keyalg RSA -keysize 2048 -keystore keystore.jks -storepass letmein -alias bob -dname "CN=Bob,OU=IT,O=Camel" -noprompt The following route first signs an exchange using Bob's alias from the KeyStore bound into the Camel Registry, and then verifies it using the same alias. from("direct:sign") .to("crypto:sign:keystoreSign?alias=bob&keystoreName=myKeystore&password=letmein") .log("Signature: ${header.CamelDigitalSignature}") .to("crypto:verify:keystoreVerify?alias=bob&keystoreName=myKeystore&password=letmein") .log("Verified: ${body}"); The following code shows how to load the keystore created using the above keytool command and bind it into the registry with the name myKeystore for use in the above route. The example makes use of the @Configuration and @BindToRegistry annotations introduced in Camel 3 to instantiate the KeyStore and register it with the name myKeystore . @Configuration public class KeystoreConfig { @BindToRegistry public KeyStore myKeystore() throws Exception { KeyStore store = KeyStore.getInstance("JKS"); try (FileInputStream fis = new FileInputStream("keystore.jks")) { store.load(fis, "letmein".toCharArray()); } return store; } } Again, in Spring a ref is used to look up an actual keystore instance. 23.8.3. Changing JCE Provider and Algorithm Changing the Signature algorithm or the Security provider is a simple matter of specifying their names, as sketched after the Spring Boot auto-configuration options below. You will also need to use Keys that are compatible with the algorithm you choose. 23.8.4. Changing the Signature Message Header It may be desirable to change the message header used to store the signature. A different header name can be specified in the route definition as follows: from("direct:sign") .to("crypto:sign:keystoreSign?alias=bob&keystoreName=myKeystore&password=letmein&signatureHeaderName=mySignature") .log("Signature: ${header.mySignature}") .to("crypto:verify:keystoreVerify?alias=bob&keystoreName=myKeystore&password=letmein&signatureHeaderName=mySignature"); Changing the bufferSize If you need to update the size of the buffer used to read in the Exchange payload, set the bufferSize option. 23.8.5. Supplying Keys dynamically When using a Recipient list or similar EIP, the recipient of an exchange can vary dynamically. Using the same key across all recipients may be neither feasible nor desirable. It would be useful to be able to specify signature keys dynamically on a per-exchange basis.
The exchange could then be dynamically enriched with the key of its target recipient prior to signing. To facilitate this, the signature mechanisms allow for keys to be supplied dynamically via the message headers below. DigitalSignatureConstants.SIGNATURE_PRIVATE_KEY , "CamelSignaturePrivateKey" DigitalSignatureConstants.SIGNATURE_PUBLIC_KEY_OR_CERT , "CamelSignaturePublicKeyOrCert" Even better would be to dynamically supply a keystore alias. Again, the alias can be supplied in a message header: DigitalSignatureConstants.KEYSTORE_ALIAS , "CamelSignatureKeyStoreAlias" The header would be set as follows: Exchange unsigned = getMandatoryEndpoint("direct:alias-sign").createExchange(); unsigned.getIn().setBody(payload); unsigned.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_ALIAS, "bob"); unsigned.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_PASSWORD, "letmein".toCharArray()); template.send("direct:alias-sign", unsigned); Exchange signed = getMandatoryEndpoint("direct:alias-sign").createExchange(); signed.getIn().copyFrom(unsigned.getMessage()); signed.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_ALIAS, "bob"); template.send("direct:alias-verify", signed); 23.9. Spring Boot Auto-Configuration The component supports 47 options that are listed below. Name Description Default Type camel.component.crypto.algorithm Sets the JCE name of the Algorithm that should be used for the signer. SHA256withRSA String camel.component.crypto.alias Sets the alias used to query the KeyStore for keys and Certificates to be used in signing and verifying exchanges. This value can be provided at runtime via the message header org.apache.camel.component.crypto.DigitalSignatureConstants#KEYSTORE_ALIAS. String camel.component.crypto.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.crypto.buffer-size Set the size of the buffer used to read in the Exchange payload data. 2048 Integer camel.component.crypto.certificate Set the Certificate that should be used to verify the signature in the exchange based on its payload. The option is a java.security.cert.Certificate type. Certificate camel.component.crypto.certificate-name Sets the reference name for a Certificate that can be found in the registry. String camel.component.crypto.clear-headers Determines if the Signature specific headers should be cleared after signing and verification. Defaults to true, and should only be made otherwise at your extreme peril as vital private information such as Keys and passwords may escape if unset. true Boolean camel.component.crypto.configuration To use the shared DigitalSignatureConfiguration as configuration. The option is an org.apache.camel.component.crypto.DigitalSignatureConfiguration type. DigitalSignatureConfiguration camel.component.crypto.enabled Whether to enable auto configuration of the crypto component. This is enabled by default. Boolean camel.component.crypto.key-store-parameters Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges based on the given KeyStoreParameters.
A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. The option is an org.apache.camel.support.jsse.KeyStoreParameters type. KeyStoreParameters camel.component.crypto.keystore Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. The option is a java.security.KeyStore type. KeyStore camel.component.crypto.keystore-name Sets the reference name for a Keystore that can be found in the registry. String camel.component.crypto.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.crypto.password Sets the password used to access an aliased PrivateKey in the KeyStore. String camel.component.crypto.private-key Set the PrivateKey that should be used to sign the exchange. The option is a java.security.PrivateKey type. PrivateKey camel.component.crypto.private-key-name Sets the reference name for a PrivateKey that can be found in the registry. String camel.component.crypto.provider Set the id of the security provider that provides the configured Signature algorithm. String camel.component.crypto.public-key Set the PublicKey that should be used to verify the signature in the exchange. The option is a java.security.PublicKey type. PublicKey camel.component.crypto.public-key-name Sets the reference name for a PublicKey that can be found in the registry. String camel.component.crypto.secure-random Set the SecureRandom used to initialize the Signature service. The option is a java.security.SecureRandom type. SecureRandom camel.component.crypto.secure-random-name Sets the reference name for a SecureRandom that can be found in the registry. String camel.component.crypto.signature-header-name Set the name of the message header that should be used to store the base64 encoded signature. This defaults to 'CamelDigitalSignature'. String camel.dataformat.crypto.algorithm The JCE algorithm name indicating the cryptographic algorithm that will be used. String camel.dataformat.crypto.algorithm-parameter-ref A JCE AlgorithmParameterSpec used to initialize the Cipher. Will look up the type using the given name as a java.security.spec.AlgorithmParameterSpec type. String camel.dataformat.crypto.buffer-size The size of the buffer used in the signature process. 4096 Integer camel.dataformat.crypto.crypto-provider The name of the JCE Security Provider that should be used. String camel.dataformat.crypto.enabled Whether to enable auto configuration of the crypto data format. This is enabled by default.
Boolean camel.dataformat.crypto.init-vector-ref Refers to a byte array containing the Initialization Vector that will be used to initialize the Cipher. String camel.dataformat.crypto.inline Flag indicating that the configured IV should be inlined into the encrypted data stream. The default is false. false Boolean camel.dataformat.crypto.key-ref Refers to the secret key to look up from the registry to use. String camel.dataformat.crypto.mac-algorithm The JCE algorithm name indicating the Message Authentication algorithm. HmacSHA1 String camel.dataformat.crypto.should-append-h-m-a-c Flag indicating that a Message Authentication Code should be calculated and appended to the encrypted data. true Boolean camel.dataformat.pgp.algorithm Symmetric key encryption algorithm; possible values are defined in org.bouncycastle.bcpg.SymmetricKeyAlgorithmTags; for example 2 (= TRIPLE DES), 3 (= CAST5), 4 (= BLOWFISH), 6 (= DES), 7 (= AES_128). Only relevant for encrypting. Integer camel.dataformat.pgp.armored This option will cause PGP to base64 encode the encrypted text, making it available for copy/paste, etc. false Boolean camel.dataformat.pgp.compression-algorithm Compression algorithm; possible values are defined in org.bouncycastle.bcpg.CompressionAlgorithmTags; for example 0 (= UNCOMPRESSED), 1 (= ZIP), 2 (= ZLIB), 3 (= BZIP2). Only relevant for encrypting. Integer camel.dataformat.pgp.enabled Whether to enable auto configuration of the pgp data format. This is enabled by default. Boolean camel.dataformat.pgp.hash-algorithm Signature hash algorithm; possible values are defined in org.bouncycastle.bcpg.HashAlgorithmTags; for example 2 (= SHA1), 8 (= SHA256), 9 (= SHA384), 10 (= SHA512), 11 (=SHA224). Only relevant for signing. Integer camel.dataformat.pgp.integrity Adds an integrity check/sign into the encryption file. The default value is true. true Boolean camel.dataformat.pgp.key-file-name Filename of the keyring; must be accessible as a classpath resource (but you can specify a location in the file system by using the file: prefix). String camel.dataformat.pgp.key-userid The user ID of the key in the PGP keyring used during encryption. Can also be only a part of a user ID. For example, if the user ID is Test User then you can use the part Test User to address the user ID. String camel.dataformat.pgp.password Password used when opening the private key (not used for encryption). String camel.dataformat.pgp.provider Java Cryptography Extension (JCE) provider, default is Bouncy Castle (BC). Alternatively you can use, for example, the IAIK JCE provider; in this case the provider must be registered beforehand and the Bouncy Castle provider must not be registered beforehand. The Sun JCE provider does not work. String camel.dataformat.pgp.signature-key-file-name Filename of the keyring to use for signing (during encryption) or for signature verification (during decryption); must be accessible as a classpath resource (but you can specify a location in the file system by using the file: prefix). String camel.dataformat.pgp.signature-key-ring Keyring used for signing/verifying as byte array. You cannot set the signatureKeyFileName and signatureKeyRing at the same time. String camel.dataformat.pgp.signature-key-userid User ID of the key in the PGP keyring used for signing (during encryption) or signature verification (during decryption). During the signature verification process the specified User ID restricts the public keys from the public keyring which can be used for the verification.
If no User ID is specified for the signature verification then any public key in the public keyring can be used for the verification. Can also be only a part of a user ID. For example, if the user ID is Test User then you can use the part Test User to address the User ID. String camel.dataformat.pgp.signature-password Password used when opening the private key used for signing (during encryption). String camel.dataformat.pgp.signature-verification-option Controls the behavior for verifying the signature during unmarshaling. There are four possible values: optional: The PGP message may or may not contain signatures; if it does contain signatures, then a signature verification is executed. required: The PGP message must contain at least one signature; if this is not the case an exception (PGPException) is thrown. A signature verification is executed. ignore: Contained signatures in the PGP message are ignored; no signature verification is executed. no_signature_allowed: The PGP message must not contain a signature; otherwise an exception (PGPException) is thrown. String
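To complement Section 23.8.3, the following is a minimal sketch of a route that overrides the default SHA256withRSA algorithm on both the sign and verify endpoints. The registry names myPrivateKey and myPublicKey are assumptions for illustration; they must refer to a compatible RSA key pair bound into the Camel registry, and the optional provider parameter can be added to select a specific JCE security provider.

// A minimal sketch, assuming an RSA key pair registered as "myPrivateKey" and "myPublicKey".
import org.apache.camel.builder.RouteBuilder;

public class Sha512SignatureRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:sign")
            // Override the default signature algorithm on both endpoints.
            .to("crypto:sign:sha512Sign?algorithm=SHA512withRSA&privateKeyName=myPrivateKey")
            .to("crypto:verify:sha512Verify?algorithm=SHA512withRSA&publicKeyName=myPublicKey")
            .to("mock:result");
    }
}

Both endpoints must use the same algorithm, mirroring the requirement stated earlier that signing and verifying be configured identically.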
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-crypto-starter</artifactId> </dependency>", "crypto:sign:name[?options] crypto:verify:name[?options]", "crypto:cryptoOperation:name", "KeyPair keyPair = KeyGenerator.getInstance(\"RSA\").generateKeyPair(); from(\"direct:sign\") .setHeader(DigitalSignatureConstants.SIGNATURE_PRIVATE_KEY, constant(keys.getPrivate())) .to(\"crypto:sign:message\") .to(\"direct:verify\"); from(\"direct:verify\") .setHeader(DigitalSignatureConstants.SIGNATURE_PUBLIC_KEY_OR_CERT, constant(keys.getPublic())) .to(\"crypto:verify:check\");", "keytool -genkey -keyalg RSA -keysize 2048 -keystore keystore.jks -storepass letmein -alias bob -dname \"CN=Bob,OU=IT,O=Camel\" -noprompt", "from(\"direct:sign\") .to(\"crypto:sign:keystoreSign?alias=bob&keystoreName=myKeystore&password=letmein\") .log(\"Signature: USD{header.CamelDigitalSignature}\") .to(\"crypto:verify:keystoreVerify?alias=bob&keystoreName=myKeystore&password=letmein\") .log(\"Verified: USD{body}\");", "@Configuration public class KeystoreConfig { @BindToRegistry public KeyStore myKeystore() throws Exception { KeyStore store = KeyStore.getInstance(\"JKS\"); try (FileInputStream fis = new FileInputStream(\"keystore.jks\")) { store.load(fis, \"letmein\".toCharArray()); } return store; } }", "from(\"direct:sign\") .to(\"crypto:sign:keystoreSign?alias=bob&keystoreName=myKeystore&password=letmein&signatureHeaderName=mySignature\") .log(\"Signature: USD{header.mySignature}\") .to(\"crypto:verify:keystoreVerify?alias=bob&keystoreName=myKeystore&password=letmein&signatureHeaderName=mySignature\");", "Exchange unsigned = getMandatoryEndpoint(\"direct:alias-sign\").createExchange(); unsigned.getIn().setBody(payload); unsigned.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_ALIAS, \"bob\"); unsigned.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_PASSWORD, \"letmein\".toCharArray()); template.send(\"direct:alias-sign\", unsigned); Exchange signed = getMandatoryEndpoint(\"direct:alias-sign\").createExchange(); signed.getIn().copyFrom(unsigned.getMessage()); signed.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_ALIAS, \"bob\"); template.send(\"direct:alias-verify\", signed);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-crypto-jce-component
Chapter 44. Kernel
Chapter 44. Kernel eBPF system call for tracing Red Hat Enterprise Linux 7.6 introduces the Extended Berkeley Packet Filter tool (eBPF) as a Technology Preview. This tool is enabled only for the tracing subsystem. For details, see the Red Hat Knowledgebase article at https://access.redhat.com/articles/3550581 . (BZ# 1559615 , BZ#1559756, BZ#1311586) Heterogeneous memory management included as a Technology Preview Red Hat Enterprise Linux 7.3 introduced the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959) criu rebased to version 3.9 Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU) , which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers , a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.6, the criu packages have been upgraded to upstream version 3.9, which provides a number of bug fixes and optimizations for the runC container runtime. In addition, support for the 64-bit ARM architectures and the little-endian variant of IBM Power Systems CPU architectures has been fixed. (BZ# 1400230 , BZ#1464596) kexec as a Technology Preview The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849) kexec fast reboot as a Technology Preview The kexec fast reboot feature, which was introduced in Red Hat Enterprise Linux 7.5, continues to be available as a Technology Preview. kexec fast reboot makes the reboot significantly faster. To use this feature, you must load the kexec kernel manually, and then reboot the operating system. It is not possible to make kexec fast reboot the default reboot action. A special case is using kexec fast reboot with Anaconda. This still does not make kexec fast reboot the default reboot action; however, when used with Anaconda, the operating system can automatically use kexec fast reboot after the installation is complete if the user boots the kernel with the appropriate Anaconda option. To schedule a kexec reboot, use the inst.kexec command on the kernel command line, or include a reboot --kexec line in the Kickstart file. (BZ#1464377) perf cqm has been replaced by resctrl The Intel Cache Allocation Technology (CAT) was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview. However, the perf cqm tool did not work correctly due to an incompatibility between perf infrastructure and Cache Quality of Service Monitoring (CQM) hardware support. Consequently, multiple problems occurred when using perf cqm.
These problems included most notably: perf cqm did not support the group of tasks which is allocated using resctrl; perf cqm gave random and inaccurate data due to several problems with recycling; perf cqm did not provide enough support when running different kinds of events together (the different events are, for example, tasks, system-wide, and cgroup events); perf cqm provided only partial support for cgroup events; the partial support for cgroup events did not work in cases with a hierarchy of cgroup events, or when monitoring a task in a cgroup and the cgroup together; monitoring tasks for the lifetime caused perf overhead; perf cqm reported the aggregate cache occupancy or memory bandwidth over all sockets, while in most cloud and VMM-based use cases the individual per-socket usage is needed. In Red Hat Enterprise Linux 7.5, perf cqm was replaced by the approach based on the resctrl file system, which addressed all of the aforementioned problems. (BZ# 1457533 , BZ#1288964) TC HW offloading available as a Technology Preview Starting with Red Hat Enterprise Linux 7.6, Traffic Control (TC) Hardware offloading has been provided as a Technology Preview. Hardware offloading enables selected network traffic processing functions, such as shaping, scheduling, policing, and dropping, to be executed directly in the hardware instead of waiting for software processing, which improves performance. (BZ#1503123) AMD xgbe network driver available as a Technology Preview Starting with Red Hat Enterprise Linux 7.6, the AMD xgbe network driver has been provided as a Technology Preview. (BZ#1589397)
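As a supplement to the kexec fast reboot Technology Preview described above, the following is a minimal sketch of the manual workflow; the kernel and initramfs paths are derived from the running kernel and may differ on your system.

# Load the installed kernel into kexec, reusing the current kernel command line,
# then reboot into it without going through firmware and boot loader initialization.
kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline
systemctl kexec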
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/technology_previews_kernel
Chapter 16. Network File System
Chapter 16. Network File System A Network File System ( NFS ) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. [16] In Red Hat Enterprise Linux, the nfs-utils package is required for full NFS support. Enter the following command to see if the nfs-utils package is installed: If it is not installed and you want to use NFS, use the yum utility as root to install it: 16.1. NFS and SELinux When running SELinux, the NFS daemons are confined by default except the nfsd process, which is labeled with the unconfined kernel_t domain type. The SELinux policy allows NFS to share files by default. Also, passing SELinux labels between a client and the server is supported, which provides better security control of confined domains accessing NFS volumes. For example, when a home directory is set up on an NFS volume, it is possible to specify confined domains that are able to access only the home directory and not other directories on the volume. Similarly, applications, such as Secure Virtualization, can set the label of an image file on an NFS volume, thus increasing the level of separation of virtual machines. The support for labeled NFS is disabled by default. To enable it, see Section 16.4.1, "Enabling SELinux Labeled NFS Support" . [16] See the Network File System (NFS) chapter in the Storage Administration Guide for more information.
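As a quick way to inspect the SELinux state described in this chapter, the following sketch lists the NFS-related SELinux booleans and shows the SELinux domains of the running NFS processes; the output varies by system and policy version.

# List NFS-related SELinux booleans and their current values.
getsebool -a | grep nfs
# Show the SELinux contexts of running NFS processes; nfsd appears in the kernel_t domain,
# while the other NFS daemons run in their own confined domains.
ps -eZ | grep nfs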
[ "~]USD rpm -q nfs-utils package nfs-utils is not installed", "~]# yum install nfs-utils" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-network_file_system
1.2. Bringing Linux Services Together
1.2. Bringing Linux Services Together Identity Management unifies disparate yet related Linux services into a single management environment. From there, it establishes a simple, easy way to bring host machines into the domain of those services. An IdM server is, at its core, an identity and authentication server. The primary IdM server, essentially a domain controller, uses a Kerberos server and KDC for authentication. An LDAP backend contains all of the domain information, including users, client machines, and domain configuration. Figure 1.1. The IdM Server: Unifying Services Other services are included to provide support for the core identity/authentication functions. DNS is used for machine discovery and for connecting to other clients in the domain. NTP is used to synchronize all domain clocks so that logging, certificates, and operations can occur as expected. A certificate service provides certificates for Kerberos-aware services. All of these additional services work together under the control of the IdM server. The IdM server also has a set of tools which are used to manage all of the IdM-associated services. Rather than managing the LDAP server, KDC, or DNS settings individually, using different tools on local machines, IdM has a single management toolset (CLI and web UI) that allows centralized and cohesive administration of the domain. 1.2.1. Authentication: Kerberos KDC Kerberos is an authentication protocol. Kerberos uses symmetric key cryptography to generate tickets to users. Kerberos-aware services check the ticket cache (a keytab ) and authenticate users with valid tickets. Kerberos authentication is significantly safer than normal password-based authentication because passwords are never sent over the network - even when services are accessed on other machines. In Identity Management, the Kerberos administration server is set up on the IdM domain controller, and all of the Kerberos data are stored in IdM's backend Directory Server. The Directory Server instance defines and enforces access controls for the Kerberos data. Note The IdM Kerberos server is managed through IdM tools instead of Kerberos tools because all of its data are stored in the Directory Server instance. The KDC is unaware of the Directory Server, so managing the KDC with Kerberos tools does not affect the IdM configuration. 1.2.2. Data Storage: 389 Directory Server Identity Management contains an internal 389 Directory Server instance. All of the Kerberos information, user accounts, groups, services, policy information, DNS zone and host entries, and all other information in IdM is stored in this 389 Directory Server instance. When multiple servers are configured, they can talk to each other because 389 Directory Server supports multi-master replication . Agreements are automatically configured between the initial server and any additional replicas which are added to the domain. 1.2.3. Authentication: Dogtag Certificate System Kerberos can use certificates along with keytabs for authentication, and some services require certificates for secure communication. Identity Management includes a certificate authority, through Dogtag Certificate System, with the server. This CA issues certificates to the server, replicas, and hosts and services within the IdM domain. The CA can be a root CA or it can have its policies defined by another, external CA (so that it is subordinate to that CA). Whether the CA is a root or subordinate CA is determined when the IdM server is set up. 1.2.4. 
Server/Client Discovery: DNS Identity Management defines a domain - multiple machines with different users and services, each accessing shared resources and using shared identity, authentication, and policy configuration. The clients need to be able to contact each other, as well as the IdM servers. Additionally, services like Kerberos depend on hostnames to identify their principal identities. Hostnames are associated with IP addresses using the Domain Name Service (DNS). DNS maps hostnames to IP addresses and IP addresses to hostnames, providing a resource that clients can use when they need to look up a host. From the time a client is enrolled in the IdM domain, it uses DNS service discovery to locate IdM servers within the domain and then all of the services and clients within the domain. The client installation tool automatically configures the local System Security Services Daemon (SSSD) to use the IdM domain for service discovery. SSSD already uses DNS to look for LDAP/TCP and Kerberos/UDP services; the client installation only needs to supply the domain name. SSSD service discovery is covered in the SSSD chapter in the Red Hat Enterprise Linux Deployment Guide . On the server, the installation script configures the DNS file to set which services the DNS service discovery queries. By default, DNS discovery queries the LDAP service on TCP and different Kerberos services on both UDP and TCP. The DNS file which is created is described in Section 17.2, "Using IdM and DNS Service Discovery with an Existing DNS Configuration" . Note While it is technically possible to configure the IdM domain to use DNS service discovery without having an IdM server host the DNS services, this is not recommended. Multiple DNS servers are usually configured, each one working as an authoritative resource for machines within a specific domain. Having the IdM server also be a DNS server is optional, but it is strongly recommended. When the IdM server also manages DNS, there is tight integration between the DNS zones and the IdM clients, and the DNS configuration can be managed using native IdM tools. Even if an IdM server is a DNS server, other external DNS servers can still be used. 1.2.5. Management: SSSD The System Security Services Daemon (SSSD) is a platform application that caches credentials. Most system authentication is configured locally, which means that services must check with a local user store to determine users and credentials. SSSD allows a local service to check with a local cache in SSSD; the cache may be taken from a variety of remote identity providers, including Identity Management. SSSD can cache user names and passwords, Kerberos principals and keytabs, automount maps, sudo rules that are defined on IPA servers, and SSH keys that are used by Identity Management domain users and systems. This provides two significant benefits to administrators: all identity configuration can be centralized in a single application (the IdM server); and external information can be cached on a local system to continue normal authentication operations, in case the system or the IdM server becomes unavailable. SSSD is automatically configured by IdM client installation and management scripts, so the system configuration never needs to be manually updated, even as domain configuration changes. Consistent with Windows Active Directory, SSSD allows the user to log in with either the user name attribute or the User Principal Name (UPN) attribute.
SSSD supports the true , false , and preserve values for the case_sensitive option. When the preserve value is enabled, the input matches regardless of the case, but the output is always the same case as on the server; SSSD preserves the case for the UID field as it is configured. SSSD allows certain cached entries to be refreshed in the background, so the entries are returned instantly because the back end keeps them updated at all times. Currently, entries for users, groups, and netgroups are supported. 1.2.6. Management: NTP Many services require that servers and clients have the same system time, within a certain variance. For example, Kerberos tickets use time stamps to determine their validity. If the times between the server and client skew outside the allowed range, then any Kerberos tickets are invalidated. Clocks are synchronized over a network using Network Time Protocol (NTP). A central server acts as an authoritative clock and all of the clients which reference that NTP server sync their times to match. When the IdM server is the NTP server for the domain, all times and dates are synchronized before any other operations are performed. This allows all of the date-related services - including password expirations, ticket and certificate expirations, account lockout settings, and entry creation dates - to function as expected. The IdM server, by default, works as the NTP server for the domain. Other NTP servers can also be used for the hosts.
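To illustrate the single management toolset described in this section, the following is a minimal sketch of the ipa command-line tools; the user, host, and domain names are hypothetical placeholders.

# Authenticate as an IdM administrator, then manage users, hosts, and DNS records
# from one CLI instead of using separate LDAP, Kerberos, and DNS tools.
kinit admin
ipa user-add jsmith --first=John --last=Smith
ipa host-add client1.example.com --ip-address=192.0.2.10
ipa dnsrecord-find example.com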
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/ipa-linux-services
Chapter 2. Service Telemetry Framework release information
Chapter 2. Service Telemetry Framework release information Notes for updates released during the supported lifecycle of this Service Telemetry Framework (STF) release appear in the advisory text associated with each update. 2.1. Service Telemetry Framework 1.5.0 These release notes highlight enhancements and removed functionality of this release of Service Telemetry Framework (STF). This release includes the following advisories: RHEA-2022:8735-01 Release of components for Service Telemetry Framework 1.5.0 - Container Images 2.1.1. Release notes This section outlines important details about the release, including recommended practices and notable changes to STF. You must take this information into account to ensure the best possible outcomes for your installation. BZ# 2121457 STF 1.5.0 supports OpenShift Container Platform 4.10. Earlier releases of STF were limited to OpenShift Container Platform 4.8, which is nearing the end of extended support. OpenShift Container Platform 4.10 is an Extended Update Support (EUS) release with full support until November 2022, and maintenance support until September 2023. For more information, see Red Hat OpenShift Container Platform Life Cycle Policy . 2.1.2. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. BZ# 2153825 The sg-core application plugin elasticsearch is deprecated in STF 1.5. BZ# 2152901 The use of prometheus-webhook-snmp is deprecated in STF 1.5. 2.1.3. Removed Functionality BZ# 2150029 The section in the STF documentation describing how to use STF and Gnocchi together has been removed. The use of Gnocchi is limited to autoscaling. 2.2. Service Telemetry Framework 1.5.1 These release notes highlight enhancements and removed functionality of this release of Service Telemetry Framework (STF). This release includes the following advisory: RHSA-2023:1529-04 Release of components for Service Telemetry Framework 1.5.1 - Container Images 2.2.1. Release notes This section outlines important details about the release, including recommended practices and notable changes to STF. You must take this information into account to ensure the best possible outcomes for your installation. BZ# 2176537 STF 1.5.1 supports OpenShift Container Platform 4.10 and 4.12. Earlier releases of STF were limited to OpenShift Container Platform 4.8, which is nearing the end of extended support. OpenShift Container Platform 4.12 is an Extended Update Support (EUS) release currently in full support, with maintenance support until July 2024. For more information, see Red Hat OpenShift Container Platform Life Cycle Policy . BZ# 2173856 There is an issue where the events datasource in Grafana is unavailable when events storage is disabled. The default setting of events storage is disabled. The virtual machine dashboard presents warnings about a missing datasource because the datasource is using annotations and is unavailable by default. Workaround: You can use the available switch on the virtual machine dashboard to disable the annotations and match the default deployment options in STF. 2.2.2. Enhancements This release of STF features the following enhancements: BZ# 2092544 You can have more control over certificate renewal configuration with additional certificate expiration configuration for CA and endpoint certificates for QDR and Elasticsearch.
STF-559 You can now use the additional SNMP trap delivery controls in STF to configure the trap delivery target, port, community, default trap OID, default trap severity, and trap OID prefix. BZ# 2159464 This feature has been rebuilt on golang 1.18, to remain on a supported golang version, which benefits future maintenance activities. 2.3. Service Telemetry Framework 1.5.2 These release notes highlight enhancements and removed functionality of this release of Service Telemetry Framework (STF). This release includes the following advisory: RHEA-2023:3785 Release for Service Telemetry Framework 1.5.2 2.3.1. Bug fixes These bugs were fixed in this release of STF: BZ# 2211897 Previously, you installed Prometheus Operator from the OperatorHub.io Operators CatalogSource, which interfered with in-cluster monitoring in Red Hat OpenShift Container Platform. To remedy this, you now use Prometheus Operator from the Community Operators CatalogSource during STF installation. For more information on how to migrate from the OperatorHub.io Operators CatalogSource to the Community Operators CatalogSource, see the Knowledge Base Article Migrating Service Telemetry Framework to Prometheus Operator from community-operators . 2.3.2. Enhancements This release of STF features the following enhancements: BZ# 2138179 You can now deploy Red Hat OpenStack Platform (RHOSP) with director Operator for monitoring RHOSP 16.2 with STF. 2.3.3. Removed functionality The following functionality has been removed from this release of STF: BZ# 2189670 Documentation about ephemeral storage is removed. Ensure that you use persistent storage in production deployments. 2.4. Service Telemetry Framework 1.5.3 These release notes highlight enhancements and removed functionality of this release of Service Telemetry Framework (STF). This release includes the following advisory: RHEA-2023:123051-01 Release for Service Telemetry Framework 1.5.3 2.4.1. Enhancements This release of STF features the following enhancements: JIRA# STF-1525 In versions of STF before 1.5.3, STF used the Role-based access control (RBAC) profiles created by Red Hat OpenShift Container Platform (RHOCP) Cluster Monitoring Operator. STF now manages its own RBAC profiles as part of the deployment. STF no longer requires RHOCP Cluster Monitoring on the cluster, resulting in an independent RBAC control interface. JIRA# STF-1512 To match the polling frequency of Ceilometer, the default scrape interval of the Smart Gateways and the default polling frequency of collectd in STF are now 30 seconds each. JIRA# STF-1485 If you deploy STF 1.5 with Red Hat OpenShift Container Platform (RHOCP) version 4.12 or newer, the default channel of the Certificate Manager for RHOCP Operator is stable-v1 . Deployments of STF 1.5 with RHOCP 4.10 use a channel that is a technical preview, and the deployment procedure is different. Ensure that you migrate to the stable-v1 channel before you upgrade RHOCP to version 4.13 or newer. For more information about migrating the Certificate Manager for RHOCP Operator from the tech-preview channel to the stable-v1 channel, see the Red Hat knowledgebase article Updating Service Telemetry Framework cert-manager dependency from tech-preview to stable-v1 . JIRA# STF-496 In this release of STF, the STF metrics data store (Prometheus) is supported when you use Red Hat Cluster Observability Operator (COO).
For more information about migrating from the community Prometheus Operator to Red Hat Cluster Observability Operator, see the Red Hat knowledgebase article, Migrating Service Telemetry Framework to fully supported operators . JIRA# STF-1277 In this release of STF, you can forward events to a user-provided Elasticsearch instance. JIRA# STF-1224 STF now supports Red Hat OpenShift Container Platform (RHOCP) versions from 4.12 to 4.14. JIRA# STF-1387 In this release of STF, you can configure the backends.events.elasticsearch.forwarding parameter of the ServiceTelemetry object to forward events to an Elasticsearch instance for storage. For more information about enabling Elasticsearch as a storage back end for events, see Primary parameters of the ServiceTelemetry object in the Service Telemetry Framework 1.5 Guide. 2.4.2. Removed functionality The following functionality has been removed from this release of STF: JIRA# STF-1526 STF 1.5 supports the latest Red Hat OpenShift Container Platform (RHOCP) Extended Update Support (EUS) releases, such as RHOCP 4.12 and RHOCP 4.14. Other versions of RHOCP are supported only for upgrading between EUS releases. RHOCP 4.10 is end-of-life, so STF 1.5.3 is not included in the RHOCP 4.10 CatalogSource. For more information about the supported life cycle of RHOCP, see https://access.redhat.com/support/policy/updates/openshift JIRA# STF-1504 In this release of STF, the interface in Service Telemetry Operator that you use to manage a logging storage backend is removed. Using Loki to store logs that were transported with amqp1 is not supported in production environments. JIRA# STF-1498 In this release of STF, events are not managed by default. The events pipeline is disabled when you deploy Red Hat OpenStack Platform (RHOSP). 2.4.3. Deprecated functionality The items in this section are either no longer supported, or will no longer be supported in a future release. JIRA# STF-1507 STF high availability (HA) mode is deprecated. JIRA# STF-1493 Elasticsearch management is deprecated when you deploy STF with the value of the observabilityStrategy parameter set to use_community . Elasticsearch management is removed if you set the value of the observabilityStrategy parameter to use_hybrid or use_redhat . You can still use AMQ Interconnect to transmit events from RHOSP to STF with an external Elasticsearch that you configure with a URL and other parameters, to enable the events Smart Gateway to connect and store events. For more information about how to provide a compatible connection with a user-provided instance of Elasticsearch, see the Red Hat knowledgebase article Using Service Telemetry Framework with Elasticsearch . JIRA# STF-1531 The basic authorization login methods for the STF UI interfaces are deprecated and replaced by the OAuth UI login methods. JIRA# STF-1097 In this version of STF, deploying with Elasticsearch is not supported. The Elasticsearch plugin in sg-core is deprecated and the limited use of events in dashboards for STF is removed. STF now uses a forwarding model to allow transport and storage of events to a user-provided Elasticsearch instance through the sg-core component. 2.5. Service Telemetry Framework 1.5.4 These release notes highlight enhancements and removed functionality of this release of Service Telemetry Framework (STF). This release includes the following advisory: RHSA-2024:127788-02 Release for Service Telemetry Framework 1.5.4 2.5.1.
Enhancements This release of STF features the following enhancements: JIRA# OSPRH-800 In this release of STF, you can now deploy STF in a Red Hat OpenShift Container Platform(RHOCP) disconnected environment. For more information about deploying STF in a RHOCP disconnected environment, see Deploying STF on Red Hat OpenShift Container Platform-disconnected environments in the Service Telemetry Framework 1.5 guide. JIRA# OSPRH-2577 In this release of STF, STF requests Grafana, GrafanaDashboard, and GrafanaDatasource objects from Grafana Operator v5 community operator, not Grafana Operator v4. Grafana Operator v5 is the recommended Grafana version for STF 1.5.4, but STF can request objects from Grafana Operator v4 if the Custom Resource Definitions (CRDs) for Grafana Operator v5 are not available. The default route for Grafana Operator v5 has changed from grafana-route to default-grafana-route . For more information about migrating Grafana Operator, see the Red Hat Knowledgebase solution Migrate from Grafana Operator v4 to v5. JIRA# OSPRH-2140 Previously, STF used a static target version of Prometheus, namely version 2.43.0, when you migrated from the community Prometheus Operator to the supported Cluster Observability Operator. In this release of STF, when you define the value of the observabilityStrategy parameter to use_redhat in the ServiceTelemetry object, which is the default, the Service Telemetry Operator does not request a specific version of Prometheus. If you do not specify the version of Prometheus, STF uses the default version provided by Cluster Observability Operator. JIRA# OSPRH-825 In this release of STF, if you install the Grafana Operator from the community CatalogSource and enable graphing, you can load dashboards automatically into Grafana using the graphing.dashboards.enabled parameter. You do not have to load dashboards from the github.com/infrawatch/dashboards repository. 2.5.2. Removed functionality JIRA# OSPRH-3492 In this release of STF, you cannot use basic authentication methods for the STF UI interfaces and must use the oauth-proxy interface for authentication. 2.5.3. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. Deprecation of Service Telemetry Framework After this release, STF is deprecated from full support and moves to maintenance support. At the end of the support lifecycle for Red Hat OpenStack Platform (RHOSP) 17.1, STF moves to Extended Lifecycle Support (ELS). During the maintenance support lifecycle, Red Hat will not add new features to STF 1.5. Red Hat will continue to rebase and release STF onto Extended Update Support (EUS) versions of Red Hat OpenShift Container Platform as they become available during the lifecycle of RHOSP 17.1. Red Hat continues to address critical STF bugs and CVEs. For more information on the product lifecycle for RHOSP and STF, see the Red Hat OpenStack Platform Support Life Cycle and Service Telemetry Framework Life Cycle pages on the customer portal. 2.6. Service Telemetry Framework 1.5.5 These release notes highlight enhancements and removed functionality of this release of Service Telemetry Framework (STF). This release includes the following advisory: RHBA-2024:138183-01 Release for Service Telemetry Framework 1.5.5 2.6.1. Enhancements This release of STF features the following enhancements: JIRA# OSPRH-9699 STF 1.5.5 includes support for Red Hat OpenShift Container Platform (OCP) version 4.16. 2.6.2. 
Bug fixes These bugs were fixed in this release of STF: JIRA# OSPRH-10081 This release of STF fixes an issue where the user had no permissions to see the Prometheus dashboards, even with the RoleBinding set. For more information, see the Red Hat Knowledgebase solution Openshift user has no rights to see prometheus and alertmanager dashboards in service telemetry framework. JIRA# OSPRH-10082 This release of STF fixes an issue where the following error could occur when deploying STF with observabilityStrategy: use_community , a deployed Prometheus Operator, and no Cluster Observability Operator (COO). 2.7. Documentation Changes This section details the major documentation updates delivered with Service Telemetry Framework (STF) 1.5, and the changes made to the documentation set that include adding new features, enhancements, and corrections. The section also details the addition of new titles and the removal of retired or replaced titles. Table 2.1. Document changes Date Versions impacted Affected content Description of change Mar 2025 1.5.4 https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-preparing-your-ocp-environment-for-stf_assembly#deploying-stf-on-openshift-disconnected-environments_assembly-preparing-your-ocp-environment-for-stf Channel name updated from development to stable in imagesetconfig.yaml . Mar 2025 1.5.4 https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-installing-the-core-components-of-stf_assembly#deploying-observability-operator_assembly-installing-the-core-components-of-stf Value for channel updated from development to stable in the subscription for Cluster Observability Operator. Mar 2024 1.5.4 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#deploying-stf-on-openshift-disconnected-environments_assembly-preparing-your-ocp-environment-for-stf You can now deploy STF on RHOCP-disconnected environments. Mar 2024 1.5.4 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#setting-up-grafana-to-host-the-dashboard_assembly-advanced-features Grafana Operator v4 development was discontinued upstream since December 2023. Ensure that you use Grafana Operator v5 instead. Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#creating-the-base-configuration-for-stf_assembly-completing-the-stf-configuration Event storage now uses a forwarding model and event delivery is not enabled by default. The instructions for enabling event delivery are available in the Red Hat Knowledgebase solution https://access.redhat.com/articles/7032697 Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#creating-openstack-environment-file-for-multiple-clouds_assembly-completing-the-stf-configuration https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#configuring-the-stf-connection-for-the-overcloud_assembly-completing-the-stf-configuration In the stf-connectors.yaml template, you can now use the short hostname for the the qdr::router_id value instead of the default FQDN which might be too long in older versions of QDR. 
You do not need to update your current configuration if the default values are less than 61 characters long. Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#importing-dashboards_assembly-advanced-features The procedure for importing the events dashboard was removed. Event delivery is no longer enabled by default. Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#creating-the-base-configuration-for-stf_assembly-completing-the-stf-configuration In the enable-stf.yaml file, the default polling frequency for collectd is increased from 5 seconds to 30 seconds to match the polling frequency of Ceilometer. The relevant parameters are CollectdAmqpInterval and CollectdDefaultPollingInterval . Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#creating-the-base-configuration-for-stf_assembly-completing-the-stf-configuration In the enable-stf.yaml file, the number of example default pollsters for Ceilometer is reduced to limit the number of unnecessary endpoints that are not used by the STF sample dashboards. Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#deploying-stf-to-the-openshift-environment_assembly-installing-the-core-components-of-stf The oc wait command is added to verification procedures throughout the STF Guide to highlight that some commands take time to complete. Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#configuring-the-stf-connection-for-the-overcloud_assembly-completing-the-stf-configuration In new deployments, the QDR connection uses basic password authentication by default. Configuration procedures are updated to add the password authentication step. Nov 2023 1.5.3 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#creating-an-alert-rule-in-prometheus_assembly-advanced-features PrometheusRules custom resources have been updated to match the apiVersion used by the Cluster Observability Operator. The commands used monitoring.coreos.com instead of monitoring.rhobs. 22 Jun 2023 1.5.2 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#configuring-the-stf-connection-for-the-overcloud_assembly-completing-the-stf-configuration More information about AMQ Interconnect topic parameters and topic addresses for cloud configurations. 22 Jun 2023 1.5.2 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/service_telemetry_framework_1.5/index Section added about Red Hat OpenStack Platform (RHOSP) with director Operator for monitoring RHOSP 16.2 with STF. 30 Mar 2023 1.5.1 Removed section from STF documentation titled, "Deploying to non-standard network topologies". The recommendations were unnecessary and potentially inaccurate. 
30 Mar 2023 1.5.1 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#configuration-parameters-for-snmptraps_assembly-advanced-features The additional configuration parameters available in STF 1.5.1 have been added to the "Sending Alerts as SNMP traps" section. There is more information and examples for configuring a ServiceTelemetry object for SNMP trap delivery from Prometheus Alerts. 30 Mar 2023 1.5.1 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/service_telemetry_framework_1.5/index#proc-updating-the-amq-interconnect-ca-certificate_assembly-renewing-the-amq-interconnect-certificate The tripleo-ansible-inventory.yaml path has been updated to match the correct path on RHOSP 13 and 16.2 deployments. 01 Dec 2022 1.5 Removed section from STF documentation about using Gnocchi with STF. You can only use Gnocchi for autoscaling.
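To illustrate the SNMP trap delivery controls referenced in the STF-559 entry and the 30 Mar 2023 documentation change above, a minimal sketch of the relevant ServiceTelemetry configuration follows. The snmpTraps field names and OID values shown here are assumptions inferred from the release note, not confirmed API fields; check the "Sending alerts as SNMP traps" section of the STF 1.5 guide for the authoritative parameter names before applying anything.
# Hedged sketch: configuring SNMP trap delivery on the default ServiceTelemetry object.
# The snmpTraps field names and the OID/severity values below are assumptions based on
# the release note; verify them against the STF 1.5 guide before use.
oc patch stf default --type=merge --patch '
spec:
  alerting:
    alertmanager:
      receivers:
        snmpTraps:
          enabled: true
          target: 192.168.24.254        # example trap delivery target
          port: 162                     # trap delivery port
          community: public             # SNMP community string
          trapOidPrefix: 1.3.6.1.4.1.50495.15          # example trap OID prefix
          trapDefaultOid: 1.3.6.1.4.1.50495.15.1.2.1   # example default trap OID
          trapDefaultSeverity: critical # example default trap severity
'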
[ "2024-06-27T05:24:43.253007084Z TASK [Ensure no rhobs Prometheus is installed if not using it] ******************************** 2024-06-27T05:24:43.253007084Z fatal: [localhost]: FAILED! => {\"changed\": false, \"msg\": \"Failed to find exact match for monitoring.rhobs/v1.prometheus by [kind, name, singularName, shortNames]\"}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/service_telemetry_framework_release_notes_1.5/assembly-stf-release-information_osp
Chapter 38. Verifying the KIE Server installation
Chapter 38. Verifying the KIE Server installation Verify that KIE Server is installed correctly. Prerequisites KIE Server is installed and configured. Procedure To start KIE Server, enter one of the following commands in the JWS_HOME /tomcat/bin directory: On Linux or UNIX-based systems: $ ./startup.sh On Windows: startup.bat After a few minutes, review the files in the JWS_HOME /tomcat/logs directory and correct any errors. To verify that KIE Server is working correctly, enter http://localhost:8080/kie-server/services/rest/server in a web browser. Enter the user name and password stored in the tomcat-users.xml file.
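If you prefer to verify the REST endpoint from the command line instead of a web browser, a quick check might look like the following sketch; the user name and password are placeholders for the credentials defined in tomcat-users.xml.
# Hedged sketch: query the KIE Server REST endpoint and confirm it responds.
# Replace kieserveruser/kieserverpwd with the credentials from tomcat-users.xml.
curl -u kieserveruser:kieserverpwd \
     -H "Accept: application/json" \
     http://localhost:8080/kie-server/services/rest/server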
[ "./startup.sh", "startup.bat" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/jws-kie-server-verify-proc_install-on-jws
Chapter 295. Camel SCR (deprecated)
Chapter 295. Camel SCR (deprecated) Available as of Camel 2.15 SCR stands for Service Component Runtime and is an implementation of OSGi Declarative Services specification. SCR enables any plain old Java object to expose and use OSGi services with no boilerplate code. OSGi framework knows your object by looking at SCR descriptor files in its bundle which are typically generated from Java annotations by a plugin such as org.apache.felix:maven-scr-plugin . Running Camel in an SCR bundle is a great alternative for Spring DM and Blueprint based solutions having significantly fewer lines of code between you and the OSGi framework. Using SCR your bundle can remain completely in Java world; there is no need to edit XML or properties files. This offers you full control over everything and means your IDE of choice knows exactly what is going on in your project. 295.1. Camel SCR support Camel-scr bundle is not included in Apache Camel versions prior 2.15.0, but the artifact itself can be used with any Camel version since 2.12.0. org.apache.camel/camel-scr bundle provides a base class, AbstractCamelRunner , which manages a Camel context for you and a helper class, ScrHelper , for using your SCR properties in unit tests. Camel-scr feature for Apache Karaf defines all features and bundles required for running Camel in SCR bundles. AbstractCamelRunner class ties CamelContext's lifecycle to Service Component's lifecycle and handles configuration with help of Camel's PropertiesComponent. All you have to do to make a Service Component out of your java class is to extend it from AbstractCamelRunner and add the following org.apache.felix.scr.annotations on class level: Add required annotations @Component @References({ @Reference(name = "camelComponent",referenceInterface = ComponentResolver.class, cardinality = ReferenceCardinality.MANDATORY_MULTIPLE, policy = ReferencePolicy.DYNAMIC, policyOption = ReferencePolicyOption.GREEDY, bind = "gotCamelComponent", unbind = "lostCamelComponent") }) Then implement getRouteBuilders() method which returns the Camel routes you want to run: Implement getRouteBuilders() @Override protected List<RoutesBuilder> getRouteBuilders() { List<RoutesBuilder> routesBuilders = new ArrayList<>(); routesBuilders.add(new YourRouteBuilderHere(registry)); routesBuilders.add(new AnotherRouteBuilderHere(registry)); return routesBuilders; } And finally provide the default configuration with: Default configuration in annotations @Properties({ @Property(name = "camelContextId", value = "my-test"), @Property(name = "active", value = "true"), @Property(name = "...", value = "..."), ... }) That's all. And if you used camel-archetype-scr to generate a project all this is already taken care of. 
Below is an example of a complete Service Component class, generated by camel-archetype-scr: CamelScrExample.java // This file was generated from org.apache.camel.archetypes/camel-archetype-scr/2.15-SNAPSHOT package example; import java.util.ArrayList; import java.util.List; import org.apache.camel.scr.AbstractCamelRunner; import example.internal.CamelScrExampleRoute; import org.apache.camel.RoutesBuilder; import org.apache.camel.spi.ComponentResolver; import org.apache.felix.scr.annotations.*; @Component(label = CamelScrExample.COMPONENT_LABEL, description = CamelScrExample.COMPONENT_DESCRIPTION, immediate = true, metatype = true) @Properties({ @Property(name = "camelContextId", value = "camel-scr-example"), @Property(name = "camelRouteId", value = "foo/timer-log"), @Property(name = "active", value = "true"), @Property(name = "from", value = "timer:foo?period=5000"), @Property(name = "to", value = "log:foo?showHeaders=true"), @Property(name = "messageOk", value = "Success: {{from}} -> {{to}}"), @Property(name = "messageError", value = "Failure: {{from}} -> {{to}}"), @Property(name = "maximumRedeliveries", value = "0"), @Property(name = "redeliveryDelay", value = "5000"), @Property(name = "backOffMultiplier", value = "2"), @Property(name = "maximumRedeliveryDelay", value = "60000") }) @References({ @Reference(name = "camelComponent",referenceInterface = ComponentResolver.class, cardinality = ReferenceCardinality.MANDATORY_MULTIPLE, policy = ReferencePolicy.DYNAMIC, policyOption = ReferencePolicyOption.GREEDY, bind = "gotCamelComponent", unbind = "lostCamelComponent") }) public class CamelScrExample extends AbstractCamelRunner { public static final String COMPONENT_LABEL = "example.CamelScrExample"; public static final String COMPONENT_DESCRIPTION = "This is the description for camel-scr-example."; @Override protected List<RoutesBuilder> getRouteBuilders() { List<RoutesBuilder> routesBuilders = new ArrayList<>(); routesBuilders.add(new CamelScrExampleRoute(registry)); return routesBuilders; } } CamelContextId and active properties control the CamelContext's name (defaults to "camel-runner-default") and whether it will be started or not (defaults to "false"), respectively. In addition to these you can add and use as many properties as you like. Camel's PropertiesComponent handles recursive properties and prefixing with fallback without problem. AbstractCamelRunner will make these properties available to your RouteBuilders with help of Camel's PropertiesComponent and it will also inject these values into your Service Component's and RouteBuilder's fields when their names match. The fields can be declared with any visibility level, and many types are supported (String, int, boolean, URL, ... ). 
Below is an example of a RouteBuilder class generated by camel-archetype-scr : CamelScrExampleRoute.java // This file was generated from org.apache.camel.archetypes/camel-archetype-scr/2.15-SNAPSHOT package example.internal; import org.apache.camel.LoggingLevel; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.impl.SimpleRegistry; import org.apache.commons.lang.Validate; public class CamelScrExampleRoute extends RouteBuilder { SimpleRegistry registry; // Configured fields private String camelRouteId; private Integer maximumRedeliveries; private Long redeliveryDelay; private Double backOffMultiplier; private Long maximumRedeliveryDelay; public CamelScrExampleRoute(final SimpleRegistry registry) { this.registry = registry; } @Override public void configure() throws Exception { checkProperties(); // Add a bean to Camel context registry registry.put("test", "bean"); errorHandler(defaultErrorHandler() .retryAttemptedLogLevel(LoggingLevel.WARN) .maximumRedeliveries(maximumRedeliveries) .redeliveryDelay(redeliveryDelay) .backOffMultiplier(backOffMultiplier) .maximumRedeliveryDelay(maximumRedeliveryDelay)); from("{{from}}") .startupOrder(2) .routeId(camelRouteId) .onCompletion() .to("direct:processCompletion") .end() .removeHeaders("CamelHttp*") .to("{{to}}"); from("direct:processCompletion") .startupOrder(1) .routeId(camelRouteId + ".completion") .choice() .when(simple("USD{exception} == null")) .log("{{messageOk}}") .otherwise() .log(LoggingLevel.ERROR, "{{messageError}}") .end(); } } public void checkProperties() { Validate.notNull(camelRouteId, "camelRouteId property is not set"); Validate.notNull(maximumRedeliveries, "maximumRedeliveries property is not set"); Validate.notNull(redeliveryDelay, "redeliveryDelay property is not set"); Validate.notNull(backOffMultiplier, "backOffMultiplier property is not set"); Validate.notNull(maximumRedeliveryDelay, "maximumRedeliveryDelay property is not set"); } } Let's take a look at CamelScrExampleRoute in more detail. // Configured fields private String camelRouteId; private Integer maximumRedeliveries; private Long redeliveryDelay; private Double backOffMultiplier; private Long maximumRedeliveryDelay; The values of these fields are set with values from properties by matching their names. // Add a bean to Camel context registry registry.put("test", "bean"); If you need to add some beans to CamelContext's registry for your routes, you can do it like this. public void checkProperties() { Validate.notNull(camelRouteId, "camelRouteId property is not set"); Validate.notNull(maximumRedeliveries, "maximumRedeliveries property is not set"); Validate.notNull(redeliveryDelay, "redeliveryDelay property is not set"); Validate.notNull(backOffMultiplier, "backOffMultiplier property is not set"); Validate.notNull(maximumRedeliveryDelay, "maximumRedeliveryDelay property is not set"); } It is a good idea to check that required parameters are set and they have meaningful values before allowing the routes to start. from("{{from}}") .startupOrder(2) .routeId(camelRouteId) .onCompletion() .to("direct:processCompletion") .end() .removeHeaders("CamelHttp*") .to("{{to}}"); from("direct:processCompletion") .startupOrder(1) .routeId(camelRouteId + ".completion") .choice() .when(simple("USD{exception} == null")) .log("{{messageOk}}") .otherwise() .log(LoggingLevel.ERROR, "{{messageError}}") .end(); Note that pretty much everything in the route is configured with properties. This essentially makes your RouteBuilder a template. 
SCR allows you to create more instances of your routes just by providing alternative configurations. More on this in section Using Camel SCR bundle as a template . 295.2. AbstractCamelRunner's lifecycle in SCR When component's configuration policy and mandatory references are satisfied SCR calls activate() . This creates and sets up a CamelContext through the following call chain: activate() prepare() createCamelContext() setupPropertiesComponent() configure() setupCamelContext() . Finally, the context is scheduled to start after a delay defined in AbstractCamelRunner.START_DELAY with runWithDelay() . When Camel components ( ComponentResolver services, to be exact) are registered in OSGi, SCR calls gotCamelComponent` ()` which reschedules/delays the CamelContext start further by the same AbstractCamelRunner.START_DELAY . This in effect makes CamelContext wait until all Camel components are loaded or there is a sufficient gap between them. The same logic will tell a failed-to-start CamelContext to try again whenever we add more Camel components. When Camel components are unregistered SCR calls lostCamelComponent` ()`. This call does nothing. When one of the requirements that caused the call to activate() is lost SCR will call deactivate() . This will shutdown the CamelContext. In (non-OSGi) unit tests you should use prepare() run() stop() instead of activate() deactivate() for more fine-grained control. Also, this allows us to avoid possible SCR specific operations in tests. 295.3. Using camel-archetype-scr The easiest way to create an Camel SCR bundle project is to use camel-archetype-scr and Maven. You can generate a project with the following steps: Generating a project USD mvn archetype:generate -Dfilter=org.apache.camel.archetypes:camel-archetype-scr Choose archetype: 1: local -> org.apache.camel.archetypes:camel-archetype-scr (Creates a new Camel SCR bundle project for Karaf) Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 1 Define value for property 'groupId': : example [INFO] Using property: groupId = example Define value for property 'artifactId': : camel-scr-example Define value for property 'version': 1.0-SNAPSHOT: : Define value for property 'package': example: : [INFO] Using property: archetypeArtifactId = camel-archetype-scr [INFO] Using property: archetypeGroupId = org.apache.camel.archetypes [INFO] Using property: archetypeVersion = 2.15-SNAPSHOT Define value for property 'className': : CamelScrExample Confirm properties configuration: groupId: example artifactId: camel-scr-example version: 1.0-SNAPSHOT package: example archetypeArtifactId: camel-archetype-scr archetypeGroupId: org.apache.camel.archetypes archetypeVersion: 2.15-SNAPSHOT className: CamelScrExample Y: : Done! Now run: mvn install and the bundle is ready to be deployed. 295.4. Unit testing Camel routes Service Component is a POJO and has no special requirements for (non-OSGi) unit testing. There are however some techniques that are specific to Camel SCR or just make testing easier. 
Below is an example unit test, generated by camel-archetype-scr : // This file was generated from org.apache.camel.archetypes/camel-archetype-scr/2.15-SNAPSHOT package example; import java.util.List; import org.apache.camel.scr.internal.ScrHelper; import org.apache.camel.builder.AdviceWithRouteBuilder; import org.apache.camel.component.mock.MockComponent; import org.apache.camel.component.mock.MockEndpoint; import org.apache.camel.model.ModelCamelContext; import org.apache.camel.model.RouteDefinition; import org.junit.After; import org.junit.Before; import org.junit.Rule; import org.junit.Test; import org.junit.rules.TestName; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; @RunWith(JUnit4.class) public class CamelScrExampleTest { Logger log = LoggerFactory.getLogger(getClass()); @Rule public TestName testName = new TestName(); CamelScrExample integration; ModelCamelContext context; @Before public void setUp() throws Exception { log.info("*******************************************************************"); log.info("Test: " + testName.getMethodName()); log.info("*******************************************************************"); // Set property prefix for unit testing System.setProperty(CamelScrExample.PROPERTY_PREFIX, "unit"); // Prepare the integration integration = new CamelScrExample(); integration.prepare(null, ScrHelper.getScrProperties(integration.getClass().getName())); context = integration.getContext(); // Disable JMX for test context.disableJMX(); // Fake a component for test context.addComponent("amq", new MockComponent()); } @After public void tearDown() throws Exception { integration.stop(); } @Test public void testRoutes() throws Exception { // Adjust routes List<RouteDefinition> routes = context.getRouteDefinitions(); routes.get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // Replace "from" endpoint with direct:start replaceFromWith("direct:start"); // Mock and skip result endpoint mockEndpoints("log:*"); } }); MockEndpoint resultEndpoint = context.getEndpoint("mock:log:foo", MockEndpoint.class); // resultEndpoint.expectedMessageCount(1); // If you want to just check the number of messages resultEndpoint.expectedBodiesReceived("hello"); // If you want to check the contents // Start the integration integration.run(); // Send the test message context.createProducerTemplate().sendBody("direct:start", "hello"); resultEndpoint.assertIsSatisfied(); } } Now, let's take a look at the interesting bits one by one. Using property prefixing // Set property prefix for unit testing System.setProperty(CamelScrExample.PROPERTY_PREFIX, "unit"); This allows you to override parts of the configuration by prefixing properties with "unit.". For example, unit.from overrides from for the unit test. Prefixes can be used to handle the differences between the runtime environments where your routes might run. Moving the unchanged bundle through development, testing and production environments is a typical use case. Getting test configuration from annotations integration.prepare(null, ScrHelper.getScrProperties(integration.getClass().getName())); Here we configure the Service Component in test with the same properties that would be used in OSGi environment. Mocking components for test // Fake a component for test context.addComponent("amq", new MockComponent()); Components that are not available in test can be mocked like this to allow the route to start. 
Adjusting routes for test // Adjust routes List<RouteDefinition> routes = context.getRouteDefinitions(); routes.get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // Replace "from" endpoint with direct:start replaceFromWith("direct:start"); // Mock and skip result endpoint mockEndpoints("log:*"); } }); Camel's AdviceWith feature allows routes to be modified for test. Starting the routes // Start the integration integration.run(); Here we start the Service Component and along with it the routes. Sending a test message // Send the test message context.createProducerTemplate().sendBody("direct:start", "hello"); Here we send a message to a route in test. 295.5. Running the bundle in Apache Karaf Once the bundle has been built with mvn install it's ready to be deployed. To deploy the bundle on Apache Karaf perform the following steps on Karaf command line: Deploying the bundle in Apache Karaf # Add Camel feature repository karaf@root> features:chooseurl camel 2.15-SNAPSHOT # Install camel-scr feature karaf@root> features:install camel-scr # Install commons-lang, used in the example route to validate parameters karaf@root> osgi:install mvn:commons-lang/commons-lang/2.6 # Install and start your bundle karaf@root> osgi:install -s mvn:example/camel-scr-example/1.0-SNAPSHOT # See how it's running karaf@root> log:tail -n 10 Press ctrl-c to stop watching the log. 295.5.1. Overriding the default configuration By default, Service Component's configuration PID equals the fully qualified name of its class. You can change the example bundle's properties with Karaf's config:* commands: Override a property # Override 'messageOk' property karaf@root> config:propset -p example.CamelScrExample messageOk "This is better logging" Or you can change the configuration by editing property files in Karaf's etc folder. 295.5.2. Using Camel SCR bundle as a template Let's say you have a Camel SCR bundle that implements an integration pattern that you use frequently, say, from to , with success/failure logging and redelivery which also happens to be the pattern our example route implements. You probably don't want to create a separate bundle for every instance. No worries, SCR has you covered. Create a configuration PID for your Service Component, but add a tail with a dash and SCR will use that configuration to create a new instance of your component. Creating a new Service Component instance # Create a PID with a tail karaf@root> config:edit example.CamelScrExample-anotherone # Override some properties karaf@root> config:propset camelContextId my-other-context karaf@root> config:propset to "file://removeme?fileName=removemetoo.txt" # Save the PID karaf@root> config:update This will start a new CamelContext with your overridden properties. How convenient. 295.6. Notes When designing a Service Component to be a template you typically don't want it to start without a "tailed" configuration i.e. with the default configuration. To prevent your Service Component from starting with the default configuration add policy = ConfigurationPolicy.REQUIRE `to the class level `@Component annotation.
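As a follow-up to the deployment steps in section 295.5, you can check that the Service Component and any "tailed" template instances actually activated. The commands below are a hedged sketch: scr:list and scr:details assume the Karaf SCR command support is installed, and exact command names can vary between Karaf versions.
# Hedged sketch: verifying the Service Component state in Karaf.
# scr:list / scr:details assume the Karaf SCR commands are available; names may vary by Karaf version.
karaf@root> scr:list
karaf@root> scr:details example.CamelScrExample
# List configuration PIDs for the component, including any "tailed" template instances
karaf@root> config:list "(service.pid=example.CamelScrExample*)"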
[ "@Component @References({ @Reference(name = \"camelComponent\",referenceInterface = ComponentResolver.class, cardinality = ReferenceCardinality.MANDATORY_MULTIPLE, policy = ReferencePolicy.DYNAMIC, policyOption = ReferencePolicyOption.GREEDY, bind = \"gotCamelComponent\", unbind = \"lostCamelComponent\") })", "@Override protected List<RoutesBuilder> getRouteBuilders() { List<RoutesBuilder> routesBuilders = new ArrayList<>(); routesBuilders.add(new YourRouteBuilderHere(registry)); routesBuilders.add(new AnotherRouteBuilderHere(registry)); return routesBuilders; }", "@Properties({ @Property(name = \"camelContextId\", value = \"my-test\"), @Property(name = \"active\", value = \"true\"), @Property(name = \"...\", value = \"...\"), })", "// This file was generated from org.apache.camel.archetypes/camel-archetype-scr/2.15-SNAPSHOT package example; import java.util.ArrayList; import java.util.List; import org.apache.camel.scr.AbstractCamelRunner; import example.internal.CamelScrExampleRoute; import org.apache.camel.RoutesBuilder; import org.apache.camel.spi.ComponentResolver; import org.apache.felix.scr.annotations.*; @Component(label = CamelScrExample.COMPONENT_LABEL, description = CamelScrExample.COMPONENT_DESCRIPTION, immediate = true, metatype = true) @Properties({ @Property(name = \"camelContextId\", value = \"camel-scr-example\"), @Property(name = \"camelRouteId\", value = \"foo/timer-log\"), @Property(name = \"active\", value = \"true\"), @Property(name = \"from\", value = \"timer:foo?period=5000\"), @Property(name = \"to\", value = \"log:foo?showHeaders=true\"), @Property(name = \"messageOk\", value = \"Success: {{from}} -> {{to}}\"), @Property(name = \"messageError\", value = \"Failure: {{from}} -> {{to}}\"), @Property(name = \"maximumRedeliveries\", value = \"0\"), @Property(name = \"redeliveryDelay\", value = \"5000\"), @Property(name = \"backOffMultiplier\", value = \"2\"), @Property(name = \"maximumRedeliveryDelay\", value = \"60000\") }) @References({ @Reference(name = \"camelComponent\",referenceInterface = ComponentResolver.class, cardinality = ReferenceCardinality.MANDATORY_MULTIPLE, policy = ReferencePolicy.DYNAMIC, policyOption = ReferencePolicyOption.GREEDY, bind = \"gotCamelComponent\", unbind = \"lostCamelComponent\") }) public class CamelScrExample extends AbstractCamelRunner { public static final String COMPONENT_LABEL = \"example.CamelScrExample\"; public static final String COMPONENT_DESCRIPTION = \"This is the description for camel-scr-example.\"; @Override protected List<RoutesBuilder> getRouteBuilders() { List<RoutesBuilder> routesBuilders = new ArrayList<>(); routesBuilders.add(new CamelScrExampleRoute(registry)); return routesBuilders; } }", "// This file was generated from org.apache.camel.archetypes/camel-archetype-scr/2.15-SNAPSHOT package example.internal; import org.apache.camel.LoggingLevel; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.impl.SimpleRegistry; import org.apache.commons.lang.Validate; public class CamelScrExampleRoute extends RouteBuilder { SimpleRegistry registry; // Configured fields private String camelRouteId; private Integer maximumRedeliveries; private Long redeliveryDelay; private Double backOffMultiplier; private Long maximumRedeliveryDelay; public CamelScrExampleRoute(final SimpleRegistry registry) { this.registry = registry; } @Override public void configure() throws Exception { checkProperties(); // Add a bean to Camel context registry registry.put(\"test\", \"bean\"); errorHandler(defaultErrorHandler() 
.retryAttemptedLogLevel(LoggingLevel.WARN) .maximumRedeliveries(maximumRedeliveries) .redeliveryDelay(redeliveryDelay) .backOffMultiplier(backOffMultiplier) .maximumRedeliveryDelay(maximumRedeliveryDelay)); from(\"{{from}}\") .startupOrder(2) .routeId(camelRouteId) .onCompletion() .to(\"direct:processCompletion\") .end() .removeHeaders(\"CamelHttp*\") .to(\"{{to}}\"); from(\"direct:processCompletion\") .startupOrder(1) .routeId(camelRouteId + \".completion\") .choice() .when(simple(\"USD{exception} == null\")) .log(\"{{messageOk}}\") .otherwise() .log(LoggingLevel.ERROR, \"{{messageError}}\") .end(); } } public void checkProperties() { Validate.notNull(camelRouteId, \"camelRouteId property is not set\"); Validate.notNull(maximumRedeliveries, \"maximumRedeliveries property is not set\"); Validate.notNull(redeliveryDelay, \"redeliveryDelay property is not set\"); Validate.notNull(backOffMultiplier, \"backOffMultiplier property is not set\"); Validate.notNull(maximumRedeliveryDelay, \"maximumRedeliveryDelay property is not set\"); } }", "// Configured fields private String camelRouteId; private Integer maximumRedeliveries; private Long redeliveryDelay; private Double backOffMultiplier; private Long maximumRedeliveryDelay;", "// Add a bean to Camel context registry registry.put(\"test\", \"bean\");", "public void checkProperties() { Validate.notNull(camelRouteId, \"camelRouteId property is not set\"); Validate.notNull(maximumRedeliveries, \"maximumRedeliveries property is not set\"); Validate.notNull(redeliveryDelay, \"redeliveryDelay property is not set\"); Validate.notNull(backOffMultiplier, \"backOffMultiplier property is not set\"); Validate.notNull(maximumRedeliveryDelay, \"maximumRedeliveryDelay property is not set\"); }", "from(\"{{from}}\") .startupOrder(2) .routeId(camelRouteId) .onCompletion() .to(\"direct:processCompletion\") .end() .removeHeaders(\"CamelHttp*\") .to(\"{{to}}\"); from(\"direct:processCompletion\") .startupOrder(1) .routeId(camelRouteId + \".completion\") .choice() .when(simple(\"USD{exception} == null\")) .log(\"{{messageOk}}\") .otherwise() .log(LoggingLevel.ERROR, \"{{messageError}}\") .end();", "mvn archetype:generate -Dfilter=org.apache.camel.archetypes:camel-archetype-scr Choose archetype: 1: local -> org.apache.camel.archetypes:camel-archetype-scr (Creates a new Camel SCR bundle project for Karaf) Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 1 Define value for property 'groupId': : example [INFO] Using property: groupId = example Define value for property 'artifactId': : camel-scr-example Define value for property 'version': 1.0-SNAPSHOT: : Define value for property 'package': example: : [INFO] Using property: archetypeArtifactId = camel-archetype-scr [INFO] Using property: archetypeGroupId = org.apache.camel.archetypes [INFO] Using property: archetypeVersion = 2.15-SNAPSHOT Define value for property 'className': : CamelScrExample Confirm properties configuration: groupId: example artifactId: camel-scr-example version: 1.0-SNAPSHOT package: example archetypeArtifactId: camel-archetype-scr archetypeGroupId: org.apache.camel.archetypes archetypeVersion: 2.15-SNAPSHOT className: CamelScrExample Y: :", "mvn install", "// This file was generated from org.apache.camel.archetypes/camel-archetype-scr/2.15-SNAPSHOT package example; import java.util.List; import org.apache.camel.scr.internal.ScrHelper; import org.apache.camel.builder.AdviceWithRouteBuilder; import org.apache.camel.component.mock.MockComponent; import 
org.apache.camel.component.mock.MockEndpoint; import org.apache.camel.model.ModelCamelContext; import org.apache.camel.model.RouteDefinition; import org.junit.After; import org.junit.Before; import org.junit.Rule; import org.junit.Test; import org.junit.rules.TestName; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; @RunWith(JUnit4.class) public class CamelScrExampleTest { Logger log = LoggerFactory.getLogger(getClass()); @Rule public TestName testName = new TestName(); CamelScrExample integration; ModelCamelContext context; @Before public void setUp() throws Exception { log.info(\"*******************************************************************\"); log.info(\"Test: \" + testName.getMethodName()); log.info(\"*******************************************************************\"); // Set property prefix for unit testing System.setProperty(CamelScrExample.PROPERTY_PREFIX, \"unit\"); // Prepare the integration integration = new CamelScrExample(); integration.prepare(null, ScrHelper.getScrProperties(integration.getClass().getName())); context = integration.getContext(); // Disable JMX for test context.disableJMX(); // Fake a component for test context.addComponent(\"amq\", new MockComponent()); } @After public void tearDown() throws Exception { integration.stop(); } @Test public void testRoutes() throws Exception { // Adjust routes List<RouteDefinition> routes = context.getRouteDefinitions(); routes.get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // Replace \"from\" endpoint with direct:start replaceFromWith(\"direct:start\"); // Mock and skip result endpoint mockEndpoints(\"log:*\"); } }); MockEndpoint resultEndpoint = context.getEndpoint(\"mock:log:foo\", MockEndpoint.class); // resultEndpoint.expectedMessageCount(1); // If you want to just check the number of messages resultEndpoint.expectedBodiesReceived(\"hello\"); // If you want to check the contents // Start the integration integration.run(); // Send the test message context.createProducerTemplate().sendBody(\"direct:start\", \"hello\"); resultEndpoint.assertIsSatisfied(); } }", "// Set property prefix for unit testing System.setProperty(CamelScrExample.PROPERTY_PREFIX, \"unit\");", "integration.prepare(null, ScrHelper.getScrProperties(integration.getClass().getName()));", "// Fake a component for test context.addComponent(\"amq\", new MockComponent());", "// Adjust routes List<RouteDefinition> routes = context.getRouteDefinitions(); routes.get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // Replace \"from\" endpoint with direct:start replaceFromWith(\"direct:start\"); // Mock and skip result endpoint mockEndpoints(\"log:*\"); } });", "// Start the integration integration.run();", "// Send the test message context.createProducerTemplate().sendBody(\"direct:start\", \"hello\");", "Add Camel feature repository karaf@root> features:chooseurl camel 2.15-SNAPSHOT Install camel-scr feature karaf@root> features:install camel-scr Install commons-lang, used in the example route to validate parameters karaf@root> osgi:install mvn:commons-lang/commons-lang/2.6 Install and start your bundle karaf@root> osgi:install -s mvn:example/camel-scr-example/1.0-SNAPSHOT See how it's running karaf@root> log:tail -n 10 Press ctrl-c to stop watching the log.", "Override 'messageOk' property karaf@root> config:propset -p example.CamelScrExample messageOk \"This is 
better logging\"", "Create a PID with a tail karaf@root> config:edit example.CamelScrExample-anotherone Override some properties karaf@root> config:propset camelContextId my-other-context karaf@root> config:propset to \"file://removeme?fileName=removemetoo.txt\" Save the PID karaf@root> config:update" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/camel_scr_deprecated
Data Grid documentation
Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_rest_api/rhdg-docs_datagrid
Chapter 4. Resolved issues
Chapter 4. Resolved issues The following are resolved issues for this release: Issue Summary JBCS-1582 mod_proxy_cluster does not recover http member if websocket is enabled JBCS-1146 Difference between zip and rpm versions of Apachectl For details of any security fixes in this release, see the errata links in Advisories related to this release .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_release_notes/resolved_issues
Managing content in automation hub
Managing content in automation hub Red Hat Ansible Automation Platform 2.4 Create and manage collections, content and repositories in automation hub Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/managing_content_in_automation_hub/index
Chapter 1. About the automation savings planner
Chapter 1. About the automation savings planner An automation savings plan gives you the ability to plan, track, and analyze the potential efficiency and cost savings of your automation initiatives. Use Red Hat Insights for Red Hat Ansible Automation Platform to create an automation savings plan by defining a list of tasks needed to complete an automation job. You can then link your automation savings plans to an Ansible job template in order to accurately measure the time and cost savings upon completion of an automation job. To create an automation savings plan, you can utilize the automation savings planner to prioritize the various automation jobs throughout your organization and understand the potential time and cost savings for your automation initiatives. 1.1. Creating a new automation savings plan Create an automation savings plan by defining the tasks needed to complete an automation job using the automation savings planner. The details you provide when creating a savings plan, namely the number of hosts and the manual duration, will be used to calculate your savings from automating this plan. See this section for more information. Procedure From the navigation panel, select Automation Analytics Savings Planner . Click Add Plan . Provide some information about your automation job: Enter descriptive information, such as a name, description, and type of automation. Enter technical information, such as the number of hosts, the duration to manually complete this job, and how often you complete this job. Click . In the tasks section, list the tasks needed to complete this plan: Enter each task in the field, then click Add . Rearrange tasks by dragging the item up/down the tasks list. Click . Note The task list is for your planning purposes only, and does not currently factor into your automation savings calculation. Select a template to link to this plan, then click Save . Your new savings plan is now created and displayed on the automation savings planner list view. 1.2. Edit an existing savings plan Edit any information about an existing savings plan by clicking on it from the savings planner list view. Procedure From the navigation panel, select Automation Analytics Savings Planner . On the automation savings plan, click Click the More Actions icon ... , then click Edit . Make any changes to the automation plan, then click Save . 1.3. Link a savings plan to a job template You can associate a job template to a savings plan to allow Insights for Ansible Automation Platform to provide a more accurate time and cost savings estimate for completing this savings plan. Procedure From the navigation panel, select Automation Analytics Savings Planner . Click the More Actions icon ... and select Link Template . Click Save . 1.4. Review savings calculations for your automation plans The automation savings planner offers a calculation of how much time and money you can save by automating a job. Red Hat Insights for Red Hat Ansible Automation Platform takes data from the plan details and the associated job template to provide you with an accurate projection of your cost savings when you complete this savings plan. To do so, navigate to your savings planner page, click the name of an existing plan, then navigate to the Statistics tab. The statistics chart displays a projection of your monetary and time savings based on the information you provided when creating a savings plan. 
Primarily, the statistics chart subtracts the automated cost from the manual cost of executing the plan to provide the total resources saved upon automation. The chart then displays this data by year to show you the cumulative benefits for automating the plan over time. Click between Money and Time to view the different types of savings for automating the plan. 1.5. Filter and sort plans on the list view page Find specific types of automation savings plans by filtering or sorting your savings planner list view. Procedure From the navigation panel, select Automation Analytics Savings Planner . To filter your savings plans based on type, or sort your savings plans by a certain order, select a filter option on the horizontal toolbar.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/planning_your_automation_jobs_using_the_automation_savings_planner/assembly-automation-savings-planner
Chapter 5. Strategies for repartitioning a disk
Chapter 5. Strategies for repartitioning a disk There are different approaches to repartitioning a disk. These include: Unpartitioned free space is available. An unused partition is available. Free space in an actively used partition is available. Note The following examples are simplified for clarity and do not reflect the exact partition layout when actually installing Red Hat Enterprise Linux. 5.1. Using unpartitioned free space Partitions that are already defined and do not span the entire hard disk, leave unallocated space that is not part of any defined partition. The following diagram shows what this might look like. Figure 5.1. Disk with unpartitioned free space The first diagram represents a disk with one primary partition and an undefined partition with unallocated space. The second diagram represents a disk with two defined partitions with allocated space. An unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. On a new disk, you can create the necessary partitions from the unused space. Most preinstalled operating systems are configured to take up all available space on a disk drive. 5.2. Using space from an unused partition In the following example, the first diagram represents a disk with an unused partition. The second diagram represents reallocating an unused partition for Linux. Figure 5.2. Disk with an unused partition To use the space allocated to the unused partition, delete the partition and then create the appropriate Linux partition instead. Alternatively, during the installation process, delete the unused partition and manually create new partitions. 5.3. Using free space from an active partition This process can be difficult to manage because an active partition, that is already in use, contains the required free space. In most cases, hard disks of computers with preinstalled software contain one larger partition holding the operating system and data. Warning If you want to use an operating system (OS) on an active partition, you must reinstall the OS. Be aware that some computers, which include pre-installed software, do not include installation media to reinstall the original OS. Check whether this applies to your OS before you destroy an original partition and the OS installation. To optimise the use of available free space, you can use the methods of destructive or non-destructive repartitioning. 5.3.1. Destructive repartitioning Destructive repartitioning destroys the partition on your hard drive and creates several smaller partitions instead. Backup any needed data from the original partition as this method deletes the complete contents. After creating a smaller partition for your existing operating system, you can: Reinstall software. Restore your data. Start your Red Hat Enterprise Linux installation. The following diagram is a simplified representation of using the destructive repartitioning method. Figure 5.3. Destructive repartitioning action on disk Warning This method deletes all data previously stored in the original partition. 5.3.2. Non-destructive repartitioning Non-destructive repartitioning resizes partitions, without any data loss. This method is reliable, however it takes longer processing time on large drives. The following is a list of methods, which can help initiate non-destructive repartitioning. Compress existing data The storage location of some data cannot be changed. 
This can prevent the resizing of a partition to the required size, and ultimately lead to a destructive repartition process. Compressing data in an already existing partition can help you resize your partitions as needed. It can also help to maximize the free space available. The following diagram is a simplified representation of this process. Figure 5.4. Data compression on a disk To avoid any possible data loss, create a backup before continuing with the compression process. Resize the existing partition By resizing an already existing partition, you can free up more space. Depending on your resizing software, the results may vary. In the majority of cases, you can create a new unformatted partition of the same type, as the original partition. The steps you take after resizing can depend on the software you use. In the following example, the best practice is to delete the new DOS (Disk Operating System) partition, and create a Linux partition instead. Verify what is most suitable for your disk before initiating the resizing process. Figure 5.5. Partition resizing on a disk Optional: Create new partitions Some pieces of resizing software support Linux based systems. In such cases, there is no need to delete the newly created partition after resizing. Creating a new partition afterwards depends on the software you use. The following diagram represents the disk state, before and after creating a new partition. Figure 5.6. Disk with final partition configuration
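For a concrete sense of the first two strategies, the following commands show one way to inspect unallocated space and carve a Linux partition out of it. Treat this as a hedged sketch: /dev/sdX, the partition number, and the 10GiB-20GiB range are placeholders for your own device and layout, and you should back up your data before changing any partition table.
# Hedged sketch: review free space and create a Linux partition from it.
# /dev/sdX, the partition number, and the size range are placeholders.
parted /dev/sdX unit GiB print free                        # show existing partitions and unallocated space
parted --script /dev/sdX mkpart primary xfs 10GiB 20GiB    # create a partition in the free space
mkfs.xfs /dev/sdX2                                         # format the new partition (adjust the device number)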
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/strategies-for-repartitioning-a-disk_managing-storage-devices
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the certificate authority (CA) to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in the Planning guide . 
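Before relying on a minimum configuration deployment, it can help to confirm what CPU and memory your worker nodes actually provide. A quick, hedged check with standard oc commands might look like the following; the node name is a placeholder, and the authoritative thresholds are in the Resource requirements section of the Planning guide.
# Hedged sketch: inspect worker node capacity before deploying OpenShift Data Foundation.
oc get nodes -l node-role.kubernetes.io/worker -o wide
# Show allocatable CPU and memory for one node; compare against the Planning guide requirements.
oc describe node <worker-node-name> | grep -A 6 "Allocatable"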
Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_amazon_web_services/preparing_to_deploy_openshift_data_foundation
Chapter 17. Kafka Exporter
Chapter 17. Kafka Exporter Kafka Exporter is an open source project to enhance monitoring of Apache Kafka brokers and clients. Kafka Exporter is provided with AMQ Streams for deployment with a Kafka cluster to extract additional metrics data from Kafka brokers related to offsets, consumer groups, consumer lag, and topics. The metrics data is used, for example, to help identify slow consumers. Lag data is exposed as Prometheus metrics, which can then be presented in Grafana for analysis. If you are already using Prometheus and Grafana for monitoring of built-in Kafka metrics, you can configure Prometheus to also scrape the Kafka Exporter Prometheus endpoint. Additional resources Kafka exposes metrics through JMX, which can then be exported as Prometheus metrics. Chapter 8, Monitoring your cluster using JMX 17.1. Consumer lag Consumer lag indicates the difference in the rate of production and consumption of messages. Specifically, consumer lag for a given consumer group indicates the delay between the last message in the partition and the message being currently picked up by that consumer. The lag reflects the position of the consumer offset in relation to the end of the partition log. This difference is sometimes referred to as the delta between the producer offset and consumer offset, the read and write positions in the Kafka broker topic partitions. Suppose a topic streams 100 messages a second. A lag of 1000 messages between the producer offset (the topic partition head) and the last offset the consumer has read means a 10-second delay. The importance of monitoring consumer lag For applications that rely on the processing of (near) real-time data, it is critical to monitor consumer lag to check that it does not become too big. The greater the lag becomes, the further the process moves from the real-time processing objective. Consumer lag, for example, might be a result of consuming too much old data that has not been purged, or through unplanned shutdowns. Reducing consumer lag Typical actions to reduce lag include: Scaling-up consumer groups by adding new consumers Increasing the retention time for a message to remain in a topic Adding more disk capacity to increase the message buffer Actions to reduce consumer lag depend on the underlying infrastructure and the use cases AMQ Streams is supporting. For instance, a lagging consumer is less likely to benefit from the broker being able to service a fetch request from its disk cache. And in certain cases, it might be acceptable to automatically drop messages until a consumer has caught up. 17.2. Kafka Exporter alerting rule examples The sample alert notification rules specific to Kafka Exporter are as follows: UnderReplicatedPartition An alert to warn that a topic is under-replicated and the broker is not replicating enough partitions. The default configuration is for an alert if there are one or more under-replicated partitions for a topic. The alert might signify that a Kafka instance is down or the Kafka cluster is overloaded. A planned restart of the Kafka broker may be required to restart the replication process. TooLargeConsumerGroupLag An alert to warn that the lag on a consumer group is too large for a specific topic partition. The default configuration is 1000 records. A large lag might indicate that consumers are too slow and are falling behind the producers. NoMessageForTooLong An alert to warn that a topic has not received messages for a period of time. The default configuration for the time period is 10 minutes. 
The delay might be a result of a configuration issue preventing a producer from publishing messages to the topic. You can adapt alerting rules according to your specific needs. Additional resources For more information about setting up alerting rules, see Configuration in the Prometheus documentation. 17.3. Kafka Exporter metrics Lag information is exposed by Kafka Exporter as Prometheus metrics for presentation in Grafana. Kafka Exporter exposes metrics data for brokers, topics, and consumer groups. Table 17.1. Broker metrics output Name Information kafka_brokers Number of brokers in the Kafka cluster Table 17.2. Topic metrics output Name Information kafka_topic_partitions Number of partitions for a topic kafka_topic_partition_current_offset Current topic partition offset for a broker kafka_topic_partition_oldest_offset Oldest topic partition offset for a broker kafka_topic_partition_in_sync_replica Number of in-sync replicas for a topic partition kafka_topic_partition_leader Leader broker ID of a topic partition kafka_topic_partition_leader_is_preferred Shows 1 if a topic partition is using the preferred broker kafka_topic_partition_replicas Number of replicas for this topic partition kafka_topic_partition_under_replicated_partition Shows 1 if a topic partition is under-replicated Table 17.3. Consumer group metrics output Name Information kafka_consumergroup_current_offset Current topic partition offset for a consumer group kafka_consumergroup_lag Current approximate lag for a consumer group at a topic partition 17.4. Running Kafka Exporter Kafka Exporter is provided with the download archive used for Installing AMQ Streams . You can run it to expose Prometheus metrics for presentation in a Grafana dashboard. Prerequisites AMQ Streams is installed on the host. This procedure assumes you already have access to a Grafana user interface and Prometheus is deployed and added as a data source. Procedure Run the Kafka Exporter script using appropriate configuration parameter values. ./bin/kafka_exporter --kafka.server=< kafka-bootstrap-address >:9092 --kafka.version=2.7.0 --< my-other-parameters > The parameters require a double-hyphen convention, such as --kafka.server . Table 17.4. Kafka Exporter configuration parameters Option Description Default kafka.server Host/port address of the Kafka server. kafka:9092 kafka.version Kafka broker version. 1.0.0 group.filter A regular expression to specify the consumer groups to include in the metrics. .* (all) topic.filter A regular expression to specify the topics to include in the metrics. .* (all) sasl.< parameter > Parameters to enable and connect to the Kafka cluster using SASL/PLAIN authentication, with user name and password. false tls.< parameter > Parameters to enable and connect to the Kafka cluster using TLS authentication, with optional certificate and key. false web.listen-address Port address to expose the metrics. :9308 web.telemetry-path Path for the exposed metrics. /metrics log.level Logging configuration, to log messages with a given severity (debug, info, warn, error, fatal) or above. info log.enable-sarama Boolean to enable Sarama logging, a Go client library used by the Kafka Exporter. false You can use kafka_exporter --help for information on the properties. Configure Prometheus to monitor the Kafka Exporter metrics. For more information on configuring Prometheus, see the Prometheus documentation . Enable Grafana to present the Kafka Exporter metrics data exposed by Prometheus. 
For more information, see Presenting Kafka Exporter metrics in Grafana . 17.5. Presenting Kafka Exporter metrics in Grafana Using Kafka Exporter Prometheus metrics as a data source, you can create a dashboard of Grafana charts. For example, from the metrics you can create the following Grafana charts: Message in per second (from topics) Message in per minute (from topics) Lag by consumer group Messages consumed per minute (by consumer groups) When metrics data has been collected for some time, the Kafka Exporter charts are populated. Use the Grafana charts to analyze lag and to check if actions to reduce lag are having an impact on an affected consumer group. If, for example, Kafka brokers are adjusted to reduce lag, the dashboard will show the Lag by consumer group chart going down and the Messages consumed per minute chart going up. Additional resources Example dashboard for Kafka Exporter Grafana documentation
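Note that the lag reported by Kafka Exporter can also be cross-checked programmatically. The following Java sketch uses the Kafka AdminClient to compute the lag for a consumer group as the difference between each partition's latest offset and the group's committed offset. The bootstrap address and group name are placeholder assumptions, and the example is independent of Kafka Exporter itself.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka:9092"); // assumed address
        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for the (assumed) consumer group
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("my-group")
                         .partitionsToOffsetAndMetadata().get();
            // Latest (end) offsets for the same partitions
            Map<TopicPartition, OffsetSpec> request = new HashMap<>();
            committed.keySet().forEach(tp -> request.put(tp, OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(request).all().get();
            // Lag = end offset minus committed offset, per partition
            committed.forEach((tp, om) -> {
                long lag = latest.get(tp).offset() - om.offset();
                System.out.println(tp + " lag=" + lag);
            });
        }
    }
}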
[ "./bin/kafka_exporter --kafka.server=< kafka-bootstrap-address >:9092 --kafka.version=2.7.0 --< my-other-parameters >" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/assembly-kafka-exporter-str
Chapter 1. About the Fuse Console
Chapter 1. About the Fuse Console The Red Hat Fuse Console is a web console based on HawtIO open source software. For a list of supported browsers, go to Supported Configurations . The Fuse Console provides a central interface to examine and manage the details of one or more deployed Fuse containers. You can also monitor Red Hat Fuse and system resources, perform updates, and start or stop services. The Fuse Console is available when you install Red Hat Fuse standalone or use Fuse on OpenShift. The integrations that you can view and manage in the Fuse Console depend on the plugins that are running. Possible plugins include: Camel JMX OSGI Runtime Logs
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_springboot_standalone/fuse-console-overview-all_springboot
Chapter 4. Tracking Modifications to Directory Entries
Chapter 4. Tracking Modifications to Directory Entries In certain situations it is useful to track when changes are made to entries. There are two aspects of entry modifications that the Directory Server tracks: Using change sequence numbers to track changes to the database. This is similar to change sequence numbers used in replication and synchronization. Every normal directory operation triggers a sequence number. Assigning creation and modification information. These attributes record the names of the user who created and most recently modified an entry, as well as the timestamps of when it was created and modified. Note The entry update sequence number (USN), modify time and name, and create time and name are all operational attributes and are not returned in a regular ldapsearch . For details on running a search for operational attributes, see Section 14.4.7, "Searching for Operational Attributes" . 4.1. Tracking Modifications to the Database through Update Sequence Numbers The USN Plug-in enables LDAP clients and servers to identify if entries have been changed. 4.1.1. An Overview of the Entry Sequence Numbers When the USN Plug-in is enabled, update sequence numbers (USNs) are sequential numbers that are assigned to an entry whenever a write operation is performed against the entry. (Write operations include add, modify, modrdn, and delete operations. Internal database operations, like export operations, are not counted in the update sequence.) A USN counter keeps track of the most recently assigned USN. 4.1.1.1. Local and Global USNs The USN is evaluated globally, for the entire database, not for the single entry. The USN is similar to the change sequence number for replication and synchronization, in that it simply ticks upward to track any changes in the database or directory. However, the entry USN is maintained separately from the CSNs, and USNs are not replicated. The entry shows the change number for the last modification to that entry in the entryUSN operational attribute. For further details about operational attributes, see Section 14.4.7, "Searching for Operational Attributes" . Example 4.1. Example Entry USN To display the entryusn attribute of the uid= example ,ou=People,dc=example,dc=com user entry: The USN Plug-in has two modes, local mode and global mode: In local mode, each back end database has an instance of the USN Plug-in with a USN counter specific to that back end database. This is the default setting. In global mode, there is a global instance of the USN Plug-in with a global USN counter that applies to changes made to the entire directory. When the USN Plug-in is set to local mode, results are limited to the local back end database. When the USN Plug-in is set to global mode, the returned results are for the entire directory. The root DSE shows the most recent USN assigned to any entry in the database in the lastusn attribute. When the USN Plug-in is set to local mode, so each database has its own local USN counter, the lastUSN shows both the database which assigned the USN and the USN: For example: In global mode, when the database uses a shared USN counter, the lastUSN attribute shows the latest USN only: 4.1.1.2. Importing USN Entries When entries are imported, the USN Plug-in uses the nsslapd-entryusn-import-initval attribute to check if the entry has an assigned USN. If the value of nsslapd-entryusn-import-initval is numerical, the imported entry will use this numerical value as the entry's USN. 
If the value of nsslapd-entryusn-import-initval is not numerical, the USN Plug-in will use the value of the lastUSN attribute and increment it by one as the USN for the imported entry. 4.1.2. Enabling the USN Plug-in This section describes how to enable the USN plug-in to record USNs on entries. 4.1.2.1. Enabling the USN Plug-in Using the Command Line To enable the USN plug-in using the command line: Use the dsconf utility to enable the plug-in: Restart the instance: 4.1.2.2. Enabling the USN Plug-in Using the Web Console To enable the USN plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the USN plug-in. Change the status to ON to enable the plug-in. Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 4.1.3. Global USNs With the default settings, Directory Server uses unique update sequence numbers (USN) for each back end database. Alternatively, you can enable unique USNs across all back end databases. Note The USN plug-in must be enabled to use this feature. See Section 4.1.2, "Enabling the USN Plug-in" . 4.1.3.1. Identifying Whether Global USNs are Enabled This section describes how to identify whether USNs are enabled across all back end databases. 4.1.3.1.1. Identifying Whether Global USNs are Enabled Using the Command Line To display the current status of the global USN feature using the command line: 4.1.3.1.2. Identifying Whether Global USNs are Enabled Using the Web Console To display the current status of the global USN feature using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the USN plug-in. Verify that the USN Global switch is set to On . 4.1.3.2. Enabling Global USNs 4.1.3.2.1. Enabling Global USNs Using the Command Line To enable global USNs using the command line: Enable global USNs: Restart the instance: 4.1.3.2.2. Enabling Global USNs Using the Web Console To enable global USNs using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the USN plug-in. Change the status of the plug-in to On . Change the USN Global status to On . Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 4.1.4. Cleaning up USN Tombstone Entries The USN plug-in moves entries to tombstone entries when the entry is deleted. If replication is enabled, then separate tombstone entries are kept by both the USN and Replication plug-ins. Both tombstone entries are deleted by the replication process, but for server performance, it can be beneficial to delete the USN tombstones: before converting a server to a replica to free memory for the server 4.1.4.1. Cleaning up USN Tombstone Entries Using the Command Line To remove all USN tombstone entries from the dc=example,dc=com suffix using the command line: Optionally, pass the -o max_USN option to the command to delete USN tombstone entries up to the specified value. 4.1.4.2. 
Cleaning up USN Tombstone Entries Using the Web Console To remove all USN tombstone entries from the dc=example,dc=com suffix using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the USN plug-in. Press the Run Fixup Task button. Fill the fields, and press Run .
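As described in Section 4.1.1, entryUSN is an operational attribute, so an LDAP client must request it explicitly. The following Java sketch illustrates this with plain JNDI; the host name, bind credentials, and entry DN are placeholder assumptions and the example is not part of the Directory Server tooling.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class EntryUsnLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://server.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
        env.put(Context.SECURITY_CREDENTIALS, "password"); // placeholder

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.OBJECT_SCOPE);
            // Operational attributes such as entryusn must be requested explicitly
            controls.setReturningAttributes(new String[] { "entryusn" });
            NamingEnumeration<SearchResult> results = ctx.search(
                    "uid=example,ou=People,dc=example,dc=com", "(objectClass=*)", controls);
            while (results.hasMore()) {
                System.out.println(results.next().getAttributes().get("entryusn"));
            }
        } finally {
            ctx.close();
        }
    }
}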
[ "ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com:389 -x -b \"uid= example ,ou=People,dc=example,dc=com\" -s base -x entryusn dn: uid= example ,ou=People,dc=example,dc=com entryusn: 17653", "lastusn; database_name : USN", "lastusn;example1: 2130 lastusn;example2: 2070", "lastusn: 4200", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin usn enable", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin usn global USN global mode is disabled", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin usn global on", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin usn cleanup -s \"dc=example,dc=com\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Tracking_Modifications_to_Directory_Entries
Chapter 8. Message Routing
Chapter 8. Message Routing Abstract The message routing patterns describe various ways of linking message channels together. This includes various algorithms that can be applied to the message stream (without modifying the body of the message). 8.1. Content-Based Router Overview A content-based router , shown in Figure 8.1, "Content-Based Router Pattern" , enables you to route messages to the appropriate destination based on the message contents. Figure 8.1. Content-Based Router Pattern Java DSL example The following example shows how to route a request from an input, seda:a , endpoint to either seda:b , queue:c , or seda:d depending on the evaluation of various predicate expressions: XML configuration example The following example shows how to configure the same route in XML: 8.2. Message Filter Overview A message filter is a processor that eliminates undesired messages based on specific criteria. In Apache Camel, the message filter pattern, shown in Figure 8.2, "Message Filter Pattern" , is implemented by the filter() Java DSL command. The filter() command takes a single predicate argument, which controls the filter. When the predicate is true , the incoming message is allowed to proceed, and when the predicate is false , the incoming message is blocked. Figure 8.2. Message Filter Pattern Java DSL example The following example shows how to create a route from endpoint, seda:a , to endpoint, seda:b , that blocks all messages except for those messages whose foo header have the value, bar : To evaluate more complex filter predicates, you can invoke one of the supported scripting languages, such as XPath, XQuery, or SQL (see Part II, "Routing Expression and Predicate Languages" ). The following example defines a route that blocks all messages except for those containing a person element whose name attribute is equal to James : XML configuration example The following example shows how to configure the route with an XPath predicate in XML (see Part II, "Routing Expression and Predicate Languages" ): Filtered endpoint required inside </filter> tag Make sure you put the endpoint you want to filter (for example, <to uri="seda:b"/> ) before the closing </filter> tag or the filter will not be applied (in 2.8+, omitting this will result in an error). Filtering with beans Here is an example of using a bean to define the filter behavior: Using stop() Available as of Camel 2.0 Stop is a special type of filter that filters out all messages. Stop is convenient to use in a content-based router when you need to stop further processing in one of the predicates. In the following example, we do not want messages with the word Bye in the message body to propagate any further in the route. We prevent this in the when() predicate using .stop() . Knowing if Exchange was filtered or not Available as of Camel 2.5 The message filter EIP will add a property on the Exchange which states if it was filtered or not. The property has the key Exchange.FILTER_MATCHED which has the String value of CamelFilterMatched . Its value is a boolean indicating true or false . If the value is true then the Exchange was routed in the filter block. 8.3. Recipient List Overview A recipient list , shown in Figure 8.3, "Recipient List Pattern" , is a type of router that sends each incoming message to multiple different destinations. In addition, a recipient list typically requires that the list of recipients be calculated at run time. Figure 8.3. 
Recipient List Pattern Recipient list with fixed destinations The simplest kind of recipient list is where the list of destinations is fixed and known in advance, and the exchange pattern is InOnly . In this case, you can hardwire the list of destinations into the to() Java DSL command. Note The examples given here, for the recipient list with fixed destinations, work only with the InOnly exchange pattern (similar to a pipes and filters pattern ). If you want to create a recipient list for exchange patterns with Out messages, use the multicast pattern instead. Java DSL example The following example shows how to route an InOnly exchange from a consumer endpoint, queue:a , to a fixed list of destinations: XML configuration example The following example shows how to configure the same route in XML: Recipient list calculated at run time In most cases, when you use the recipient list pattern, the list of recipients should be calculated at runtime. To do this use the recipientList() processor, which takes a list of destinations as its sole argument. Because Apache Camel applies a type converter to the list argument, it should be possible to use most standard Java list types (for example, a collection, a list, or an array). For more details about type converters, see Section 34.3, "Built-In Type Converters" . The recipients receive a copy of the same exchange instance and Apache Camel executes them sequentially. Java DSL example The following example shows how to extract the list of destinations from a message header called recipientListHeader , where the header value is a comma-separated list of endpoint URIs: In some cases, if the header value is a list type, you might be able to use it directly as the argument to recipientList() . For example: However, this example is entirely dependent on how the underlying component parses this particular header. If the component parses the header as a simple string, this example will not work. The header must be parsed into some type of Java list. XML configuration example The following example shows how to configure the preceding route in XML, where the header value is a comma-separated list of endpoint URIs: Sending to multiple recipients in parallel Available as of Camel 2.2 The recipient list pattern supports parallelProcessing , which is similar to the corresponding feature in the splitter pattern . Use the parallel processing feature to send the exchange to multiple recipients concurrently - for example: In Spring XML, the parallel processing feature is implemented as an attribute on the recipientList tag - for example: Stop on exception Available as of Camel 2.2 The recipient list supports the stopOnException feature, which you can use to stop sending to any further recipients, if any recipient fails. And in Spring XML its an attribute on the recipient list tag. In Spring XML, the stop on exception feature is implemented as an attribute on the recipientList tag - for example: Note You can combine parallelProcessing and stopOnException in the same route. Ignore invalid endpoints Available as of Camel 2.3 The recipient list pattern supports the ignoreInvalidEndpoints option, which enables the recipient list to skip invalid endpoints ( the routing slips pattern also supports this option). For example: And in Spring XML, you can enable this option by setting the ignoreInvalidEndpoints attribute on the recipientList tag, as follows Consider the case where myHeader contains the two endpoints, direct:foo,xxx:bar . The first endpoint is valid and works. 
The second is invalid and, therefore, ignored. Apache Camel logs at INFO level whenever an invalid endpoint is encountered. Using custom AggregationStrategy Available as of Camel 2.2 You can use a custom AggregationStrategy with the recipient list pattern , which is useful for aggregating replies from the recipients in the list. By default, Apache Camel uses the UseLatestAggregationStrategy aggregation strategy, which keeps just the last received reply. For a more sophisticated aggregation strategy, you can define your own implementation of the AggregationStrategy interface - see Section 8.5, "Aggregator" for details. For example, to apply the custom aggregation strategy, MyOwnAggregationStrategy , to the reply messages, you can define a Java DSL route as follows: In Spring XML, you can specify the custom aggregation strategy as an attribute on the recipientList tag, as follows: Using custom thread pool Available as of Camel 2.2 This is only needed when you use parallelProcessing . By default Camel uses a thread pool with 10 threads. Notice this is subject to change when we overhaul thread pool management and configuration later (hopefully in Camel 2.2). You configure this just as you would with the custom aggregation strategy. Using method call as recipient list You can use a bean integration to provide the recipients, for example: Where the MessageRouter bean is defined as follows: Bean as recipient list You can make a bean behave as a recipient list by adding the @RecipientList annotation to a methods that returns a list of recipients. For example: In this case, do not include the recipientList DSL command in the route. Define the route as follows: Using timeout Available as of Camel 2.5 If you use parallelProcessing , you can configure a total timeout value in milliseconds. Camel will then process the messages in parallel until the timeout is hit. This allows you to continue processing if one message is slow. In the example below, the recipientlist header has the value, direct:a,direct:b,direct:c , so that the message is sent to three recipients. We have a timeout of 250 milliseconds, which means only the last two messages can be completed within the timeframe. The aggregation therefore yields the string result, BC . Note This timeout feature is also supported by splitter and both multicast and recipientList . By default if a timeout occurs the AggregationStrategy is not invoked. However you can implement a specialized version This allows you to deal with the timeout in the AggregationStrategy if you really need to. Timeout is total The timeout is total, which means that after X time, Camel will aggregate the messages which has completed within the timeframe. The remainders will be cancelled. Camel will also only invoke the timeout method in the TimeoutAwareAggregationStrategy once, for the first index which caused the timeout. Apply custom processing to the outgoing messages Before recipientList sends a message to one of the recipient endpoints, it creates a message replica, which is a shallow copy of the original message. In a shallow copy, the headers and payload of the original message are copied by reference only. Each new copy does not contain its own instance of those elements. As a result, shallow copies of a message are linked and you cannot apply custom processing when routing them to different endpoints. 
If you want to perform some custom processing on each message replica before the replica is sent to its endpoint, you can invoke the onPrepare DSL command in the recipientList clause. The onPrepare command inserts a custom processor just after the message has been shallow-copied and just before the message is dispatched to its endpoint. For example, in the following route, the CustomProc processor is invoked on the message replica for each recipient endpoint: A common use case for the onPrepare DSL command is to perform a deep copy of some or all elements of a message. This allows each message replica to be modified independently of the others. For example, the following CustomProc processor class performs a deep copy of the message body, where the message body is presumed to be of type, BodyType , and the deep copy is performed by the method, BodyType .deepCopy() . Options The recipientList DSL command supports the following options: Name Default Value Description delimiter , Delimiter used if the Expression returned multiple endpoints. strategyRef Refers to an AggregationStrategy to be used to assemble the replies from the recipients, into a single outgoing message from the Section 8.3, "Recipient List" . By default Camel will use the last reply as the outgoing message. strategyMethodName This option can be used to explicitly specify the method name to use, when using POJOs as the AggregationStrategy . strategyMethodAllowNull false This option can be used, when using POJOs as the AggregationStrategy . If false , the aggregate method is not used, when there is no data to enrich. If true , null values are used for the oldExchange , when there is no data to enrich. parallelProcessing false Camel 2.2: If enabled, sending messages to the recipients occurs concurrently. Note that the caller thread will still wait until all messages have been fully processed before it continues. It is only the sending and processing of the replies from the recipients that happens concurrently. parallelAggregate false If enabled, the aggregate method on AggregationStrategy can be called concurrently. Note that this requires the implementation of AggregationStrategy to be thread-safe. By default, this option is false , which means that Camel automatically synchronizes calls to the aggregate method. In some use-cases, however, you can improve performance by implementing AggregationStrategy as thread-safe and setting this option to true . executorServiceRef Camel 2.2: Refers to a custom Thread Pool to be used for parallel processing. Notice if you set this option, then parallel processing is automatically implied, and you do not have to enable that option as well. stopOnException false Camel 2.2: Whether or not to stop processing immediately when an exception occurs. If disabled, Camel will send the message to all recipients regardless of whether one of them failed. You can deal with exceptions in the AggregationStrategy class, where you have full control over how to handle them. ignoreInvalidEndpoints false Camel 2.3: If an endpoint uri could not be resolved, should it be ignored. Otherwise Camel will throw an exception stating the endpoint uri is not valid. streaming false Camel 2.5: If enabled, Camel will process replies out of order, that is, in the order in which they come back. If disabled, Camel will process replies in the same order as specified by the Expression. timeout Camel 2.5: Sets a total timeout specified in millis. 
If the Section 8.3, "Recipient List" hasn't been able to send and process all replies within the given timeframe, then the timeout triggers and the Section 8.3, "Recipient List" breaks out and continues. Notice that if you provide an AggregationStrategy , the timeout method is invoked before breaking out. onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the copy of the Exchange each recipient will receive. This allows you to do any custom logic, such as deep-cloning the message payload if that is needed. shareUnitOfWork false Camel 2.8: Whether the unit of work should be shared. See the same option on Section 8.4, "Splitter" for more details. cacheSize 0 Camel 2.13.1/2.12.4: Allows you to configure the cache size for the ProducerCache which caches producers for reuse in the routing slip. By default, the default cache size of 0 is used. Setting the value to -1 turns off the cache altogether. Using Exchange Pattern in Recipient List By default, the Recipient List uses the current exchange pattern. However, there may be a few cases where you want to send a message to a recipient using a different exchange pattern. For example, you may have a route that initiates as an InOnly route. If you want to use the InOut exchange pattern with a recipient list, you need to configure the exchange pattern directly in the recipient endpoints. The following example illustrates the route where the new files will start as InOnly and then route to a recipient list. If you want to use InOut with the ActiveMQ (JMS) endpoint, you need to specify this by setting the exchangePattern option to InOut . The response from the JMS request/reply will then continue to be routed, and thus the response is stored as a file in the outbox directory. Note The InOut exchange pattern must get a response during the timeout. However, it fails if the response is not received. 8.4. Splitter Overview A splitter is a type of router that splits an incoming message into a series of outgoing messages. Each of the outgoing messages contains a piece of the original message. In Apache Camel, the splitter pattern, shown in Figure 8.4, "Splitter Pattern" , is implemented by the split() Java DSL command. Figure 8.4. Splitter Pattern The Apache Camel splitter actually supports two patterns, as follows: Simple splitter - implements the splitter pattern on its own. Splitter/aggregator - combines the splitter pattern with the aggregator pattern, such that the pieces of the message are recombined after they have been processed. Before the splitter separates the original message into parts, it makes a shallow copy of the original message. In a shallow copy, the headers and payload of the original message are copied as references only. Although the splitter does not itself route the resulting message parts to different endpoints, parts of the split message might undergo secondary routing. Because the message parts are shallow copies, they remain linked to the original message. As a result, they cannot be modified independently. If you want to apply custom logic to different copies of a message part before routing it to a set of endpoints, you must use the onPrepareRef DSL option in the splitter clause to make a deep copy of the original message. For information about using options, see the section called "Options" .
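As an orientation for the Java DSL and XML examples discussed below, a minimal line-by-line splitter route could look like the following sketch. The seda: endpoint names follow the surrounding discussion, but the listing is an illustration rather than a verbatim example from this guide.

import org.apache.camel.builder.RouteBuilder;

public class LineSplitterRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("seda:a")
            // each line of the incoming message body becomes a separate outgoing exchange
            .split(body().tokenize("\n"))
            .to("seda:b");
    }
}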
Java DSL example The following example defines a route from seda:a to seda:b that splits messages by converting each line of an incoming message into a separate outgoing message: The splitter can use any expression language, so you can split messages using any of the supported scripting languages, such as XPath, XQuery, or SQL (see Part II, "Routing Expression and Predicate Languages" ). The following example extracts bar elements from an incoming message and insert them into separate outgoing messages: XML configuration example The following example shows how to configure a splitter route in XML, using the XPath scripting language: You can use the tokenize expression in the XML DSL to split bodies or headers using a token, where the tokenize expression is defined using the tokenize element. In the following example, the message body is tokenized using the \n separator character. To use a regular expression pattern, set regex=true in the tokenize element. Splitting into groups of lines To split a big file into chunks of 1000 lines, you can define a splitter route as follows in the Java DSL: The second argument to tokenize specifies the number of lines that should be grouped into a single chunk. The streaming() clause directs the splitter not to read the whole file at once (resulting in much better performance if the file is large). The same route can be defined in XML DSL as follows: The output when using the group option is always of java.lang.String type. Skip first item To skip the first item in the message you can use the skipFirst option. In Java DSL, make the third option in the tokenize parameter true : The same route can be defined in XML DSL as follows: Splitter reply If the exchange that enters the splitter has the InOut message-exchange pattern (that is, a reply is expected), the splitter returns a copy of the original input message as the reply message in the Out message slot. You can override this default behavior by implementing your own aggregation strategy . Parallel execution If you want to execute the resulting pieces of the message in parallel, you can enable the parallel processing option, which instantiates a thread pool to process the message pieces. For example: You can customize the underlying ThreadPoolExecutor used in the parallel splitter. For example, you can specify a custom executor in the Java DSL as follows: You can specify a custom executor in the XML DSL as follows: Using a bean to perform splitting As the splitter can use any expression to do the splitting, you can use a bean to perform splitting, by invoking the method() expression. The bean should return an iterable value such as: java.util.Collection , java.util.Iterator , or an array. The following route defines a method() expression that calls a method on the mySplitterBean bean instance: Where mySplitterBean is an instance of the MySplitterBean class, which is defined as follows: You can use a BeanIOSplitter object with the Splitter EIP to split big payloads by using a stream mode to avoid reading the entire content into memory. The following example shows how to set up a BeanIOSplitter object by using the mapping file, which is loaded from the classpath: Note The BeanIOSplitter class is new in Camel 2.18. It is not available in Camel 2.17. The following example adds an error handler: Exchange properties The following properties are set on each split exchange: header type description CamelSplitIndex int Apache Camel 2.0: A split counter that increases for each Exchange being split. 
The counter starts from 0. CamelSplitSize int Apache Camel 2.0: The total number of Exchanges that was split. This header is not applied for stream based splitting. CamelSplitComplete boolean Apache Camel 2.4: Whether or not this Exchange is the last. Splitter/aggregator pattern It is a common pattern for the message pieces to be aggregated back into a single exchange, after processing of the individual pieces has completed. To support this pattern, the split() DSL command lets you provide an AggregationStrategy object as the second argument. Java DSL example The following example shows how to use a custom aggregation strategy to recombine a split message after all of the message pieces have been processed: AggregationStrategy implementation The custom aggregation strategy, MyOrderStrategy , used in the preceding route is implemented as follows: Stream based processing When parallel processing is enabled, it is theoretically possible for a later message piece to be ready for aggregation before an earlier piece. In other words, the message pieces might arrive at the aggregator out of order. By default, this does not happen, because the splitter implementation rearranges the message pieces back into their original order before passing them into the aggregator. If you would prefer to aggregate the message pieces as soon as they are ready (and possibly out of order), you can enable the streaming option, as follows: You can also supply a custom iterator to use with streaming, as follows: Streaming and XPath You cannot use streaming mode in conjunction with XPath. XPath requires the complete DOM XML document in memory. Stream based processing with XML If an incoming messages is a very large XML file, you can process the message most efficiently using the tokenizeXML sub-command in streaming mode. For example, given a large XML file that contains a sequence of order elements, you can split the file into order elements using a route like the following: You can do the same thing in XML, by defining a route like the following: It is often the case that you need access to namespaces that are defined in one of the enclosing (ancestor) elements of the token elements. You can copy namespace definitions from one of the ancestor elements into the token element, by specifing which element you want to inherit namespace definitions from. In the Java DSL, you specify the ancestor element as the second argument of tokenizeXML . For example, to inherit namespace definitions from the enclosing orders element: In the XML DSL, you specify the ancestor element using the inheritNamespaceTagName attribute. For example: Options The split DSL command supports the following options: Name Default Value Description strategyRef Refers to an AggregationStrategy to be used to assemble the replies from the sub-messages, into a single outgoing message from the Section 8.4, "Splitter" . See the section titled What does the splitter return below for whats used by default. strategyMethodName This option can be used to explicitly specify the method name to use, when using POJOs as the AggregationStrategy . strategyMethodAllowNull false This option can be used, when using POJOs as the AggregationStrategy . If false , the aggregate method is not used, when there is no data to enrich. If true , null values are used for the oldExchange , when there is no data to enrich. parallelProcessing false If enables then processing the sub-messages occurs concurrently. 
Note the caller thread will still wait until all sub-messages has been fully processed, before it continues. parallelAggregate false If enabled, the aggregate method on AggregationStrategy can be called concurrently. Note that this requires the implementation of AggregationStrategy to be thread-safe. By default, this option is false , which means that Camel automatically synchronizes calls to the aggregate method. In some use-cases, however, you can improve performance by implementing AggregationStrategy as thread-safe and setting this option to true . executorServiceRef Refers to a custom Thread Pool to be used for parallel processing. Notice if you set this option, then parallel processing is automatic implied, and you do not have to enable that option as well. stopOnException false Camel 2.2: Whether or not to stop continue processing immediately when an exception occurred. If disable, then Camel continue splitting and process the sub-messages regardless if one of them failed. You can deal with exceptions in the AggregationStrategy class where you have full control how to handle that. streaming false If enabled then Camel will split in a streaming fashion, which means it will split the input message in chunks. This reduces the memory overhead. For example if you split big messages its recommended to enable streaming. If streaming is enabled then the sub-message replies will be aggregated out-of-order, eg in the order they come back. If disabled, Camel will process sub-message replies in the same order as they where splitted. timeout Camel 2.5: Sets a total timeout specified in millis. If the Section 8.3, "Recipient List" hasn't been able to split and process all replies within the given timeframe, then the timeout triggers and the Section 8.4, "Splitter" breaks out and continues. Notice if you provide an AggregationStrategy then the timeout method is invoked before breaking out. onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the sub-message of the Exchange, before its processed. This allows you to do any custom logic, such as deep-cloning the message payload if that's needed etc. shareUnitOfWork false Camel 2.8: Whether the unit of work should be shared. See further below for more details. 8.5. Aggregator Overview The aggregator pattern, shown in Figure 8.5, "Aggregator Pattern" , enables you to combine a batch of related messages into a single message. Figure 8.5. Aggregator Pattern To control the aggregator's behavior, Apache Camel allows you to specify the properties described in Enterprise Integration Patterns , as follows: Correlation expression - Determines which messages should be aggregated together. The correlation expression is evaluated on each incoming message to produce a correlation key . Incoming messages with the same correlation key are then grouped into the same batch. For example, if you want to aggregate all incoming messages into a single message, you can use a constant expression. Completeness condition - Determines when a batch of messages is complete. You can specify this either as a simple size limit or, more generally, you can specify a predicate condition that flags when the batch is complete. Aggregation algorithm - Combines the message exchanges for a single correlation key into a single message exchange. For example, consider a stock market data system that receives 30,000 messages per second. You might want to throttle down the message flow if your GUI tool cannot cope with such a massive update rate. 
The incoming stock quotes can be aggregated together simply by choosing the latest quote and discarding the older prices. (You can apply a delta processing algorithm, if you prefer to capture some of the history.) Note The Aggregator now enlists in JMX using a ManagedAggregateProcessorMBean that includes more information. It enables you to use the aggregate controller to control it. How the aggregator works Figure 8.6, "Aggregator Implementation" shows an overview of how the aggregator works, assuming it is fed with a stream of exchanges that have correlation keys such as A, B, C, or D. Figure 8.6. Aggregator Implementation The incoming stream of exchanges shown in Figure 8.6, "Aggregator Implementation" is processed as follows: The correlator is responsible for sorting exchanges based on the correlation key. For each incoming exchange, the correlation expression is evaluated, yielding the correlation key. For example, for the exchange shown in Figure 8.6, "Aggregator Implementation" , the correlation key evaluates to A. The aggregation strategy is responsible for merging exchanges with the same correlation key. When a new exchange, A, comes in, the aggregator looks up the corresponding aggregate exchange , A', in the aggregation repository and combines it with the new exchange. Until a particular aggregation cycle is completed, incoming exchanges are continuously aggregated with the corresponding aggregate exchange. An aggregation cycle lasts until terminated by one of the completion mechanisms. Note From Camel 2.16, the new XSLT Aggregation Strategy allows you to merge two messages with an XSLT file. You can access the AggregationStrategies.xslt() file from the toolbox. If a completion predicate is specified on the aggregator, the aggregate exchange is tested to determine whether it is ready to be sent to the processor in the route. Processing continues as follows: If complete, the aggregate exchange is processed by the latter part of the route. There are two alternative models for this: synchronous (the default), which causes the calling thread to block, or asynchronous (if parallel processing is enabled), where the aggregate exchange is submitted to an executor thread pool (as shown in Figure 8.6, "Aggregator Implementation" ). If not complete, the aggregate exchange is saved back to the aggregation repository. In parallel with the synchronous completion tests, it is possible to enable an asynchronous completion test by enabling either the completionTimeout option or the completionInterval option. These completion tests run in a separate thread and, whenever the completion test is satisfied, the corresponding exchange is marked as complete and starts to be processed by the latter part of the route (either synchronously or asynchronously, depending on whether parallel processing is enabled or not). If parallel processing is enabled, a thread pool is responsible for processing exchanges in the latter part of the route. By default, this thread pool contains ten threads, but you have the option of customizing the pool ( the section called "Threading options" ). Java DSL example The following example aggregates exchanges with the same StockSymbol header value, using the UseLatestAggregationStrategy aggregation strategy. For a given StockSymbol value, if more than three seconds elapse since the last exchange with that correlation key was received, the aggregated exchange is deemed to be complete and is sent to the mock endpoint. 
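A sketch of a route along these lines is shown below. The correlation header, aggregation strategy, and three-second completion timeout follow the description above, while the input and mock endpoint URIs are assumptions.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.UseLatestAggregationStrategy;

public class StockQuoteAggregatorRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start") // assumed input endpoint
            .aggregate(header("StockSymbol"), new UseLatestAggregationStrategy())
                // complete a group when no exchange with the same StockSymbol arrives for 3 seconds
                .completionTimeout(3000)
            .to("mock:aggregated"); // assumed mock endpoint
    }
}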
XML DSL example The following example shows how to configure the same route in XML: Specifying the correlation expression In the Java DSL, the correlation expression is always passed as the first argument to the aggregate() DSL command. You are not limited to using the Simple expression language here. You can specify a correlation expression using any of the expression languages or scripting languages, such as XPath, XQuery, SQL, and so on. For exampe, to correlate exchanges using an XPath expression, you could use the following Java DSL route: If the correlation expression cannot be evaluated on a particular incoming exchange, the aggregator throws a CamelExchangeException by default. You can suppress this exception by setting the ignoreInvalidCorrelationKeys option. For example, in the Java DSL: In the XML DSL, you can set the ignoreInvalidCorrelationKeys option is set as an attribute, as follows: Specifying the aggregation strategy In Java DSL, you can either pass the aggregation strategy as the second argument to the aggregate() DSL command or specify it using the aggregationStrategy() clause. For example, you can use the aggregationStrategy() clause as follows: Apache Camel provides the following basic aggregation strategies (where the classes belong to the org.apache.camel.processor.aggregate Java package): UseLatestAggregationStrategy Return the last exchange for a given correlation key, discarding all earlier exchanges with this key. For example, this strategy could be useful for throttling the feed from a stock exchange, where you just want to know the latest price of a particular stock symbol. UseOriginalAggregationStrategy Return the first exchange for a given correlation key, discarding all later exchanges with this key. You must set the first exchange by calling UseOriginalAggregationStrategy.setOriginal() before you can use this strategy. GroupedExchangeAggregationStrategy Concatenates all of the exchanges for a given correlation key into a list, which is stored in the Exchange.GROUPED_EXCHANGE exchange property. See the section called "Grouped exchanges" . Implementing a custom aggregation strategy If you want to apply a different aggregation strategy, you can implement one of the following aggregation strategy base interfaces: org.apache.camel.processor.aggregate.AggregationStrategy The basic aggregation strategy interface. org.apache.camel.processor.aggregate.TimeoutAwareAggregationStrategy Implement this interface, if you want your implementation to receive a notification when an aggregation cycle times out. The timeout notification method has the following signature: org.apache.camel.processor.aggregate.CompletionAwareAggregationStrategy Implement this interface, if you want your implementation to receive a notification when an aggregation cycle completes normally. The notification method has the following signature: For example, the following code shows two different custom aggregation strategies, StringAggregationStrategy and ArrayListAggregationStrategy :: Note Since Apache Camel 2.0, the AggregationStrategy.aggregate() callback method is also invoked for the very first exchange. On the first invocation of the aggregate method, the oldExchange parameter is null and the newExchange parameter contains the first incoming exchange. 
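As an illustration of the points above, a list-collecting strategy in the style of ArrayListAggregationStrategy could be sketched as follows; note how the null oldExchange on the first invocation is handled. The class body is an assumption and not a verbatim listing.

import java.util.ArrayList;
import java.util.List;
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ArrayListAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Object newBody = newExchange.getIn().getBody();
        if (oldExchange == null) {
            // first exchange for this correlation key: start a new list
            List<Object> list = new ArrayList<>();
            list.add(newBody);
            newExchange.getIn().setBody(list);
            return newExchange;
        }
        // subsequent exchanges: append to the list carried by the aggregate exchange
        @SuppressWarnings("unchecked")
        List<Object> list = oldExchange.getIn().getBody(List.class);
        list.add(newBody);
        return oldExchange;
    }
}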
To aggregate messages using the custom strategy class, ArrayListAggregationStrategy , define a route like the following: You can also configure a route with a custom aggregation strategy in XML, as follows: Controlling the lifecycle of a custom aggregation strategy You can implement a custom aggregation strategy so that its lifecycle is aligned with the lifecycle of the enterprise integration pattern that is controlling it. This can be useful for ensuring that the aggregation strategy can shut down gracefully. To implement an aggregation strategy with lifecycle support, you must implement the org.apache.camel.Service interface (in addition to the AggregationStrategy interface) and provide implementations of the start() and stop() lifecycle methods. For example, the following code example shows an outline of an aggregation strategy with lifecycle support: Exchange properties The following properties are set on each aggregated exchange: Header Type Description Aggregated Exchange Properties Exchange.AGGREGATED_SIZE int The total number of exchanges aggregated into this exchange. Exchange.AGGREGATED_COMPLETED_BY String Indicates the mechanism responsible for completing the aggregate exchange. Possible values are: predicate , size , timeout , interval , or consumer . The following properties are set on exchanges redelivered by the SQL Component aggregation repository (see the section called "Persistent aggregation repository" ): Header Type Description Redelivered Exchange Properties Exchange.REDELIVERY_COUNTER int Sequence number of the current redelivery attempt (starting at 1 ). Specifying a completion condition It is mandatory to specify at least one completion condition, which determines when an aggregate exchange leaves the aggregator and proceeds to the node on the route. The following completion conditions can be specified: completionPredicate Evaluates a predicate after each exchange is aggregated in order to determine completeness. A value of true indicates that the aggregate exchange is complete. Alternatively, instead of setting this option, you can define a custom AggregationStrategy that implements the Predicate interface, in which case the AggregationStrategy will be used as the completion predicate. completionSize Completes the aggregate exchange after the specified number of incoming exchanges are aggregated. completionTimeout (Incompatible with completionInterval ) Completes the aggregate exchange, if no incoming exchanges are aggregated within the specified timeout. In other words, the timeout mechanism keeps track of a timeout for each correlation key value. The clock starts ticking after the latest exchange with a particular key value is received. If another exchange with the same key value is not received within the specified timeout, the corresponding aggregate exchange is marked complete and sent to the node on the route. completionInterval (Incompatible with completionTimeout ) Completes all outstanding aggregate exchanges, after each time interval (of specified length) has elapsed. The time interval is not tailored to each aggregate exchange. This mechanism forces simultaneous completion of all outstanding aggregate exchanges. Hence, in some cases, this mechanism could complete an aggregate exchange immediately after it started aggregating. 
completionFromBatchConsumer When used in combination with a consumer endpoint that supports the batch consumer mechanism, this completion option automatically figures out when the current batch of exchanges is complete, based on information it receives from the consumer endpoint. See the section called "Batch consumer" . forceCompletionOnStop When this option is enabled, it forces completion of all outstanding aggregate exchanges when the current route context is stopped. The preceding completion conditions can be combined arbitrarily, except for the completionTimeout and completionInterval conditions, which cannot be simultaneously enabled. When conditions are used in combination, the general rule is that the first completion condition to trigger is the effective completion condition. Specifying the completion predicate You can specify an arbitrary predicate expression that determines when an aggregated exchange is complete. There are two possible ways of evaluating the predicate expression: On the latest aggregate exchange - this is the default behavior. On the latest incoming exchange - this behavior is selected when you enable the eagerCheckCompletion option. For example, if you want to terminate a stream of stock quotes every time you receive an ALERT message (as indicated by the value of a MsgType header in the latest incoming exchange), you can define a route like the following: The following example shows how to configure the same route using XML: Specifying a dynamic completion timeout It is possible to specify a dynamic completion timeout , where the timeout value is recalculated for every incoming exchange. For example, to set the timeout value from the timeout header in each incoming exchange, you could define a route as follows: You can configure the same route in the XML DSL, as follows: Note You can also add a fixed timeout value and Apache Camel will fall back to use this value, if the dynamic value is null or 0 . Specifying a dynamic completion size It is possible to specify a dynamic completion size , where the completion size is recalculated for every incoming exchange. For example, to set the completion size from the mySize header in each incoming exchange, you could define a route as follows: And the same example using Spring XML: Note You can also add a fixed size value and Apache Camel will fall back to use this value, if the dynamic value is null or 0 . Forcing completion of a single group from within an AggregationStrategy If you implement a custom AggregationStrategy class, there is a mechanism available to force the completion of the current message group, by setting the Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP exchange property to true on the exchange returned from the AggregationStrategy.aggregate() method. This mechanism only affects the current group: other message groups (with different correlation IDs) are not forced to complete. This mechanism overrides any other completion mechanisms, such as predicate, size, timeout, and so on. For example, the following sample AggregationStrategy class completes the current group, if the message body size is larger than 5: Forcing completion of all groups with a special message It is possible to force completion of all outstanding aggregate messages, by sending a message with a special header to the route. There are two alternative header settings you can use to force completion: Exchange.AGGREGATION_COMPLETE_ALL_GROUPS Set to true , to force completion of the current aggregation cycle. 
This message acts purely as a signal and is not included in any aggregation cycle. After processing this signal message, the content of the message is discarded. Exchange.AGGREGATION_COMPLETE_ALL_GROUPS_INCLUSIVE Set to true , to force completion of the current aggregation cycle. This message is included in the current aggregation cycle. Using AggregateController The org.apache.camel.processor.aggregate.AggregateController enables you to control the aggregate at runtime using the Java or JMX API. This can be used to force the completion of groups of exchanges, or to query the current runtime statistics. If no custom AggregateController has been configured, the aggregator provides a default implementation which you can access using the getAggregateController() method. However, it is also easy to configure a custom controller in the route using the aggregateController option. You can then use the AggregateController API to force completion. For example, you can force completion of the group with the key foo ; the number returned is the number of groups that were completed. A corresponding API call completes all outstanding groups at once. Enforcing unique correlation keys In some aggregation scenarios, you might want to enforce the condition that the correlation key is unique for each batch of exchanges. In other words, when the aggregate exchange for a particular correlation key completes, you want to make sure that no further aggregate exchanges with that correlation key are allowed to proceed. For example, you might want to enforce this condition if the latter part of the route expects to process exchanges with unique correlation key values. Depending on how the completion conditions are configured, there might be a risk of more than one aggregate exchange being generated with a particular correlation key. For example, although you might define a completion predicate that is designed to wait until all the exchanges with a particular correlation key are received, you might also define a completion timeout, which could fire before all of the exchanges with that key have arrived. In this case, the late-arriving exchanges could give rise to a second aggregate exchange with the same correlation key value. For such scenarios, you can configure the aggregator to suppress aggregate exchanges that duplicate correlation key values, by setting the closeCorrelationKeyOnCompletion option. In order to suppress duplicate correlation key values, it is necessary for the aggregator to record correlation key values in a cache. The size of this cache (the number of cached correlation keys) is specified as an argument to the closeCorrelationKeyOnCompletion() DSL command. To specify a cache of unlimited size, you can pass a value of zero or a negative integer. For example, to specify a cache size of 10000 key values: If an aggregate exchange completes with a duplicate correlation key value, the aggregator throws a ClosedCorrelationKeyException exception. Stream based processing using Simple expressions You can use Simple language expressions as the token with the tokenizeXML sub-command in streaming mode. Using Simple language expressions enables support for dynamic tokens. For example, to split a sequence of names delimited by the tag person , you can split the input into name elements using tokenizeXML and a Simple language token: get the input string of names delimited by <person> , set <person> as the token, and list the names split from the input.
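The following is a minimal Java DSL sketch of this kind of streaming split (the direct: and mock: endpoint names are illustrative; per the description above, the literal token could also be supplied as a Simple expression, for example a hypothetical ${header.tokenXmlTag}, to make it dynamic):

from("direct:start")
    // split the incoming XML payload into individual <person> elements,
    // using streaming so the whole document is never loaded into memory
    .split().tokenizeXML("person").streaming()
        // each <person>...</person> fragment arrives here as a separate exchange
        .to("mock:result")
    .end();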
Grouped exchanges You can combine all of the aggregated exchanges in an outgoing batch into a single org.apache.camel.impl.GroupedExchange holder class. To enable grouped exchanges, specify the groupExchanges() option, as shown in the following Java DSL route: The grouped exchange sent to mock:result contains the list of aggregated exchanges in the message body. The following line of code shows how a subsequent processor can access the contents of the grouped exchange in the form of a list: Note When you enable the grouped exchanges feature, you must not configure an aggregation strategy (the grouped exchanges feature is itself an aggregation strategy). Note The old approach of accessing the grouped exchanges from a property on the outgoing exchange is now deprecated and will be removed in a future release. Batch consumer The aggregator can work together with the batch consumer pattern to aggregate the total number of messages reported by the batch consumer (a batch consumer endpoint sets the CamelBatchSize , CamelBatchIndex , and CamelBatchComplete properties on the incoming exchange). For example, to aggregate all of the files found by a File consumer endpoint, you could use a route like the following: Currently, the following endpoints support the batch consumer mechanism: File, FTP, Mail, iBatis, and JPA. Persistent aggregation repository The default aggregator uses an in-memory only AggregationRepository . If you want to store pending aggregated exchanges persistently, you can use the SQL Component as a persistent aggregation repository. The SQL Component includes a JdbcAggregationRepository that persists aggregated messages on-the-fly, and ensures that you do not lose any messages. When an exchange has been successfully processed, it is marked as complete when the confirm method is invoked on the repository. This means that if the same exchange fails again, it will be retried until it is successful. Add a dependency on camel-sql To use the SQL Component, you must include a dependency on camel-sql in your project. For example, if you are using a Maven pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> Create the aggregation database tables You must create separate aggregation and completed database tables for persistence. For example, the following query creates the tables for a database named my_aggregation_repo : CREATE TABLE my_aggregation_repo ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_pk PRIMARY KEY (id) ); CREATE TABLE my_aggregation_repo_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_completed_pk PRIMARY KEY (id) ); } Configure the aggregation repository You must also configure the aggregation repository in your framework XML file (for example, Spring or Blueprint): <bean id="my_repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="repositoryName" value="my_aggregation_repo"/> <property name="transactionManager" ref="my_tx_manager"/> <property name="dataSource" ref="my_data_source"/> ... </bean> The repositoryName , transactionManager , and dataSource properties are required. For details on more configuration options for the persistent aggregation repository, see SQL Component in the Apache Camel Component Reference Guide . 
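As a sketch, a route can then reference the repository defined above through the aggregationRepositoryRef option (the correlation header, completion size, and MyAggregationStrategy class shown here are illustrative assumptions, while my_repo refers to the bean from the preceding XML snippet):

from("jms:queue:orders")
    .aggregate(header("orderId"), new MyAggregationStrategy())
        .completionSize(10)
        // persist in-flight aggregated exchanges in the "my_repo" bean
        // configured in the XML snippet above
        .aggregationRepositoryRef("my_repo")
    .to("direct:processBatch");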
Threading options As shown in Figure 8.6, "Aggregator Implementation" , the aggregator is decoupled from the latter part of the route, where the exchanges sent to the latter part of the route are processed by a dedicated thread pool. By default, this pool contains just a single thread. If you want to specify a pool with multiple threads, enable the parallelProcessing option, as follows: By default, this creates a pool with 10 worker threads. If you want to exercise more control over the created thread pool, specify a custom java.util.concurrent.ExecutorService instance using the executorService option (in which case it is unnecessary to enable the parallelProcessing option). Aggregating into a List A common aggregation scenario involves aggregating a series of incoming message bodies into a List object. To facilitate this scenario, Apache Camel provides the AbstractListAggregationStrategy abstract class, which you can quickly extend to create an aggregation strategy for this case. Incoming message bodies of type, T , are aggregated into a completed exchange, with a message body of type List<T> . For example, to aggregate a series of Integer message bodies into a List<Integer> object, you could use an aggregation strategy defined as follows: Aggregator options The aggregator supports the following options: Table 8.1. Aggregator Options Option Default Description correlationExpression Mandatory Expression which evaluates the correlation key to use for aggregation. The Exchange which has the same correlation key is aggregated together. If the correlation key could not be evaluated an Exception is thrown. You can disable this by using the ignoreBadCorrelationKeys option. aggregationStrategy Mandatory AggregationStrategy which is used to merge the incoming Exchange with the existing already merged exchanges. At first call the oldExchange parameter is null . On subsequent invocations the oldExchange contains the merged exchanges and newExchange is of course the new incoming Exchange. From Camel 2.9.2 onwards, the strategy can optionally be a TimeoutAwareAggregationStrategy implementation, which supports a timeout callback. From Camel 2.16 onwards, the strategy can also be a PreCompletionAwareAggregationStrategy implementation. It runs the completion check in a pre-completion mode. strategyRef A reference to lookup the AggregationStrategy in the Registry. completionSize Number of messages aggregated before the aggregation is complete. This option can be set as either a fixed value or using an Expression which allows you to evaluate a size dynamically - will use Integer as result. If both are set Camel will fallback to use the fixed value if the Expression result was null or 0 . completionTimeout Time in millis that an aggregated exchange should be inactive before its complete. This option can be set as either a fixed value or using an Expression which allows you to evaluate a timeout dynamically - will use Long as result. If both are set Camel will fallback to use the fixed value if the Expression result was null or 0 . You cannot use this option together with completionInterval, only one of the two can be used. completionInterval A repeating period in millis by which the aggregator will complete all current aggregated exchanges. Camel has a background task which is triggered every period. You cannot use this option together with completionTimeout, only one of them can be used. 
completionPredicate Specifies a predicate (of org.apache.camel.Predicate type), which signals when an aggregated exchange is complete. Alternatively, instead of setting this option, you can define a custom AggregationStrategy that implements the Predicate interface, in which case the AggregationStrategy will be used as the completion predicate. completionFromBatchConsumer false This option is if the exchanges are coming from a Batch Consumer. Then when enabled the Section 8.5, "Aggregator" will use the batch size determined by the Batch Consumer in the message header CamelBatchSize . See more details at Batch Consumer. This can be used to aggregate all files consumed from a see File endpoint in that given poll. eagerCheckCompletion false Whether or not to eager check for completion when a new incoming Exchange has been received. This option influences the behavior of the completionPredicate option as the Exchange being passed in changes accordingly. When false the Exchange passed in the Predicate is the aggregated Exchange which means any information you may store on the aggregated Exchange from the AggregationStrategy is available for the Predicate. When true the Exchange passed in the Predicate is the incoming Exchange, which means you can access data from the incoming Exchange. forceCompletionOnStop false If true , complete all aggregated exchanges when the current route context is stopped. groupExchanges false If enabled then Camel will group all aggregated Exchanges into a single combined org.apache.camel.impl.GroupedExchange holder class that holds all the aggregated Exchanges. And as a result only one Exchange is being sent out from the aggregator. Can be used to combine many incoming Exchanges into a single output Exchange without coding a custom AggregationStrategy yourself. ignoreInvalidCorrelationKeys false Whether or not to ignore correlation keys which could not be evaluated to a value. By default Camel will throw an Exception, but you can enable this option and ignore the situation instead. closeCorrelationKeyOnCompletion Whether or not late Exchanges should be accepted or not. You can enable this to indicate that if a correlation key has already been completed, then any new exchanges with the same correlation key be denied. Camel will then throw a closedCorrelationKeyException exception. When using this option you pass in a integer which is a number for a LRUCache which keeps that last X number of closed correlation keys. You can pass in 0 or a negative value to indicate a unbounded cache. By passing in a number you are ensured that cache wont grown too big if you use a log of different correlation keys. discardOnCompletionTimeout false Camel 2.5: Whether or not exchanges which complete due to a timeout should be discarded. If enabled, then when a timeout occurs the aggregated message will not be sent out but dropped (discarded). aggregationRepository Allows you to plug in you own implementation of org.apache.camel.spi.AggregationRepository which keeps track of the current inflight aggregated exchanges. Camel uses by default a memory based implementation. aggregationRepositoryRef Reference to lookup a aggregationRepository in the Registry. parallelProcessing false When aggregated are completed they are being send out of the aggregator. This option indicates whether or not Camel should use a thread pool with multiple threads for concurrency. If no custom thread pool has been specified then Camel creates a default pool with 10 concurrent threads. 
executorService If using parallelProcessing you can specify a custom thread pool to be used. In fact also if you are not using parallelProcessing this custom thread pool is used to send out aggregated exchanges as well. executorServiceRef Reference to lookup a executorService in the Registry timeoutCheckerExecutorService If using one of the completionTimeout , completionTimeoutExpression , or completionInterval options, a background thread is created to check for the completion for every aggregator. Set this option to provide a custom thread pool to be used rather than creating a new thread for every aggregator. timeoutCheckerExecutorServiceRef Reference to look up a timeoutCheckerExecutorService in the registry. completeAllOnStop When you stop the Aggregator, this option allows it to complete all pending exchanges from the aggregation repository. optimisticLocking false Turns on optimistic locking, which can be used in combination with an aggregation repository. optimisticLockRetryPolicy Configures the retry policy for optimistic locking. 8.6. Resequencer Overview The resequencer pattern, shown in Figure 8.7, "Resequencer Pattern" , enables you to resequence messages according to a sequencing expression. Messages that generate a low value for the sequencing expression are moved to the front of the batch and messages that generate a high value are moved to the back. Figure 8.7. Resequencer Pattern Apache Camel supports two resequencing algorithms: Batch resequencing - Collects messages into a batch, sorts the messages and sends them to their output. Stream resequencing - Re-orders (continuous) message streams based on the detection of gaps between messages. By default the resequencer does not support duplicate messages and will only keep the last message, in cases where a message arrives with the same message expression. However, in batch mode you can enable the resequencer to allow duplicates. Batch resequencing The batch resequencing algorithm is enabled by default. For example, to resequence a batch of incoming messages based on the value of a timestamp contained in the TimeStamp header, you can define the following route in Java DSL: By default, the batch is obtained by collecting all of the incoming messages that arrive in a time interval of 1000 milliseconds (default batch timeout ), up to a maximum of 100 messages (default batch size ). You can customize the values of the batch timeout and the batch size by appending a batch() DSL command, which takes a BatchResequencerConfig instance as its sole argument. For example, to modify the preceding route so that the batch consists of messages collected in a 4000 millisecond time window, up to a maximum of 300 messages, you can define the Java DSL route as follows: You can also specify a batch resequencer pattern using XML configuration. The following example defines a batch resequencer with a batch size of 300 and a batch timeout of 4000 milliseconds: Batch options Table 8.2, "Batch Resequencer Options" shows the options that are available in batch mode only. Table 8.2. Batch Resequencer Options Java DSL XML DSL Default Description allowDuplicates() batch-config/@allowDuplicates false If true , do not discard duplicate messages from the batch (where duplicate means that the message expression evaluates to the same value). 
reverse() batch-config/@reverse false If true , put the messages in reverse order (where the default ordering applied to a message expression is based on Java's string lexical ordering, as defined by String.compareTo() ). For example, if you want to resequence messages from JMS queues based on JMSPriority , you would need to combine the options, allowDuplicates and reverse , as follows: Stream resequencing To enable the stream resequencing algorithm, you must append stream() to the resequence() DSL command. For example, to resequence incoming messages based on the value of a sequence number in the seqnum header, you define a DSL route as follows: The stream-processing resequencer algorithm is based on the detection of gaps in a message stream, rather than on a fixed batch size. Gap detection, in combination with timeouts, removes the constraint of needing to know the number of messages of a sequence (that is, the batch size) in advance. Messages must contain a unique sequence number for which a predecessor and a successor is known. For example a message with the sequence number 3 has a predecessor message with the sequence number 2 and a successor message with the sequence number 4 . The message sequence 2,3,5 has a gap because the successor of 3 is missing. The resequencer therefore must retain message 5 until message 4 arrives (or a timeout occurs). By default, the stream resequencer is configured with a timeout of 1000 milliseconds, and a maximum message capacity of 100. To customize the stream's timeout and message capacity, you can pass a StreamResequencerConfig object as an argument to stream() . For example, to configure a stream resequencer with a message capacity of 5000 and a timeout of 4000 milliseconds, you define a route as follows: If the maximum time delay between successive messages (that is, messages with adjacent sequence numbers) in a message stream is known, the resequencer's timeout parameter should be set to this value. In this case, you can guarantee that all messages in the stream are delivered in the correct order to the processor. The lower the timeout value that is compared to the out-of-sequence time difference, the more likely it is that the resequencer will deliver messages out of sequence. Large timeout values should be supported by sufficiently high capacity values, where the capacity parameter is used to prevent the resequencer from running out of memory. If you want to use sequence numbers of some type other than long , you would must define a custom comparator, as follows: You can also specify a stream resequencer pattern using XML configuration. The following example defines a stream resequencer with a message capacity of 5000 and a timeout of 4000 milliseconds: Ignore invalid exchanges The resequencer EIP throws a CamelExchangeException exception, if the incoming exchange is not valid - that is, if the sequencing expression cannot be evaluated for some reason (for example, due to a missing header). You can use the ignoreInvalidExchanges option to ignore these exceptions, which means the resequencer will skip any invalid exchanges. Reject old messages The rejectOld option can be used to prevent messages being sent out of order, regardless of the mechanism used to resequence messages. When the rejectOld option is enabled, the resequencer rejects an incoming message (by throwing a MessageRejectedException exception), if the incoming messages is older (as defined by the current comparator) than the last delivered message. 8.7. 
Routing Slip Overview The routing slip pattern, shown in Figure 8.8, "Routing Slip Pattern" , enables you to route a message consecutively through a series of processing steps, where the sequence of steps is not known at design time and can vary for each message. The list of endpoints through which the message should pass is stored in a header field (the slip ), which Apache Camel reads at run time to construct a pipeline on the fly. Figure 8.8. Routing Slip Pattern The slip header The routing slip appears in a user-defined header, where the header value is a comma-separated list of endpoint URIs. For example, a routing slip that specifies a sequence of security tasks - decrypting, authenticating, and de-duplicating a message - might look like the following: The current endpoint property From Camel 2.5 the Routing Slip will set a property ( Exchange.SLIP_ENDPOINT ) on the exchange which contains the current endpoint as it advanced though the slip. This enables you to find out how far the exchange has progressed through the slip. The Section 8.7, "Routing Slip" will compute the slip beforehand which means, the slip is only computed once. If you need to compute the slip on-the-fly then use the Section 8.18, "Dynamic Router" pattern instead. Java DSL example The following route takes messages from the direct:a endpoint and reads a routing slip from the aRoutingSlipHeader header: You can specify the header name either as a string literal or as an expression. You can also customize the URI delimiter using the two-argument form of routingSlip() . The following example defines a route that uses the aRoutingSlipHeader header key for the routing slip and uses the # character as the URI delimiter: XML configuration example The following example shows how to configure the same route in XML: Ignore invalid endpoints The Section 8.7, "Routing Slip" now supports ignoreInvalidEndpoints , which the Section 8.3, "Recipient List" pattern also supports. You can use it to skip endpoints that are invalid. For example: In Spring XML, this feature is enabled by setting the ignoreInvalidEndpoints attribute on the <routingSlip> tag: Consider the case where myHeader contains the two endpoints, direct:foo,xxx:bar . The first endpoint is valid and works. The second is invalid and, therefore, ignored. Apache Camel logs at INFO level whenever an invalid endpoint is encountered. Options The routingSlip DSL command supports the following options: Name Default Value Description uriDelimiter , Delimiter used if the Expression returned multiple endpoints. ignoreInvalidEndpoints false If an endpoint uri could not be resolved, should it be ignored. Otherwise Camel will thrown an exception stating the endpoint uri is not valid. cacheSize 0 Camel 2.13.1/2.12.4: Allows to configure the cache size for the ProducerCache which caches producers for reuse in the routing slip. Will by default use the default cache size which is 0. Setting the value to -1 allows to turn off the cache all together. 8.8. Throttler Overview A throttler is a processor that limits the flow rate of incoming messages. You can use this pattern to protect a target endpoint from getting overloaded. In Apache Camel, you can implement the throttler pattern using the throttle() Java DSL command. Java DSL example To limit the flow rate to 100 messages per second, define a route as follows: If necessary, you can customize the time period that governs the flow rate using the timePeriodMillis() DSL command. 
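A minimal sketch of the basic throttling form described above, assuming illustrative seda: endpoints:

from("seda:incoming")
    // allow at most 100 messages per time period (the default period is 1000 ms)
    .throttle(100)
    .to("seda:outgoing");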
For example, to limit the flow rate to 3 messages per 30000 milliseconds, define a route as follows: XML configuration example The following example shows how to configure the preceding route in XML: Dynamically changing maximum requests per period Available as of Camel 2.8 Since an Expression is used, you can adjust this value at runtime; for example, you can provide a header with the value. At runtime Camel evaluates the expression and converts the result to a java.lang.Long type. In the example below, we use a header from the message to determine the maximum requests per period. If the header is absent, then the Section 8.8, "Throttler" uses the old value. So you only need to provide a header when the value is to be changed: Asynchronous delaying The throttler can enable non-blocking asynchronous delaying , which means that Apache Camel schedules a task to be executed in the future. The task is responsible for processing the latter part of the route (after the throttler). This allows the caller thread to unblock and service further incoming messages. For example: Note From Camel 2.17, the Throttler uses a rolling window for time periods, which gives a better flow of messages and also enhances the performance of the throttler. Options The throttle DSL command supports the following options: Name Default Value Description maximumRequestsPerPeriod Maximum number of requests per period to throttle. This option must be provided and must be a positive number. Notice, in the XML DSL, from Camel 2.8 onwards this option is configured using an Expression instead of an attribute. timePeriodMillis 1000 The time period in millis in which the throttler will allow at most maximumRequestsPerPeriod number of messages. asyncDelayed false Camel 2.4: If enabled then any messages which are delayed happen asynchronously using a scheduled thread pool. executorServiceRef Camel 2.4: Refers to a custom Thread Pool to be used if asyncDelayed has been enabled. callerRunsWhenRejected true Camel 2.4: Is used if asyncDelayed was enabled. This controls if the caller thread should execute the task if the thread pool rejected the task. 8.9. Delayer Overview A delayer is a processor that enables you to apply a relative time delay to incoming messages. Java DSL example You can use the delay() command to add a relative time delay, in units of milliseconds, to incoming messages. For example, the following route delays all incoming messages by 2 seconds: Alternatively, you can specify the time delay using an expression: The DSL commands that follow delay() are interpreted as sub-clauses of delay() . Hence, in some contexts it is necessary to terminate the sub-clauses of delay() by inserting the end() command. For example, when delay() appears inside an onException() clause, you would terminate it as follows: XML configuration example The following example demonstrates the delay in XML DSL: Creating a custom delay You can use an expression combined with a bean to determine the delay as follows: Where the bean class could be defined as follows: Asynchronous delaying You can let the delayer use non-blocking asynchronous delaying , which means that Apache Camel schedules a task to be executed in the future. The task is responsible for processing the latter part of the route (after the delayer). This allows the caller thread to unblock and service further incoming messages.
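A minimal Java DSL sketch of an asynchronously delayed route (the endpoint names and the 2 second delay are illustrative):

from("activemq:queue:incoming")
    // delay each message by 2 seconds; asyncDelayed() schedules the rest of
    // the route on a scheduled thread pool so the caller thread is not blocked
    .delay(2000).asyncDelayed()
    .to("activemq:queue:outgoing");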
For example: The same route can be written in the XML DSL, as follows: Options The delayer pattern supports the following options: Name Default Value Description asyncDelayed false Camel 2.4: If enabled then delayed messages happens asynchronously using a scheduled thread pool. executorServiceRef Camel 2.4: Refers to a custom Thread Pool to be used if asyncDelay has been enabled. callerRunsWhenRejected true Camel 2.4: Is used if asyncDelayed was enabled. This controls if the caller thread should execute the task if the thread pool rejected the task. 8.10. Load Balancer Overview The load balancer pattern allows you to delegate message processing to one of several endpoints, using a variety of different load-balancing policies. Java DSL example The following route distributes incoming messages between the target endpoints, mock:x , mock:y , mock:z , using a round robin load-balancing policy: XML configuration example The following example shows how to configure the same route in XML: Load-balancing policies The Apache Camel load balancer supports the following load-balancing policies: Round robin Random Sticky Topic Failover Weighted round robin and weighted random Custom Load Balancer Circuit Breaker Round robin The round robin load-balancing policy cycles through all of the target endpoints, sending each incoming message to the endpoint in the cycle. For example, if the list of target endpoints is, mock:x , mock:y , mock:z , then the incoming messages are sent to the following sequence of endpoints: mock:x , mock:y , mock:z , mock:x , mock:y , mock:z , and so on. You can specify the round robin load-balancing policy in Java DSL, as follows: Alternatively, you can configure the same route in XML, as follows: Random The random load-balancing policy chooses the target endpoint randomly from the specified list. You can specify the random load-balancing policy in Java DSL, as follows: Alternatively, you can configure the same route in XML, as follows: Sticky The sticky load-balancing policy directs the In message to an endpoint that is chosen by calculating a hash value from a specified expression. The advantage of this load-balancing policy is that expressions of the same value are always sent to the same server. For example, by calculating the hash value from a header that contains a username, you ensure that messages from a particular user are always sent to the same target endpoint. Another useful approach is to specify an expression that extracts the session ID from an incoming message. This ensures that all messages belonging to the same session are sent to the same target endpoint. You can specify the sticky load-balancing policy in Java DSL, as follows: Alternatively, you can configure the same route in XML, as follows: Note When you add the sticky option to the failover load balancer, the load balancer starts from the last known good endpoint. Topic The topic load-balancing policy sends a copy of each In message to all of the listed destination endpoints (effectively broadcasting the message to all of the destinations, like a JMS topic). You can use the Java DSL to specify the topic load-balancing policy, as follows: Alternatively, you can configure the same route in XML, as follows: Failover Available as of Apache Camel 2.0 The failover load balancer is capable of trying the processor in case an Exchange failed with an exception during processing. You can configure the failover with a list of specific exceptions that trigger failover. 
If you do not specify any exceptions, failover is triggered by any exception. The failover load balancer uses the same strategy for matching exceptions as the onException exception clause. Enable stream caching if using streams If you use streaming, you should enable Stream Caching when using the failover load balancer. This is needed so the stream can be re-read when failing over. The failover load balancer supports the following options: Option Type Default Description inheritErrorHandler boolean true Camel 2.3: Specifies whether to use the errorHandler configured on the route. If you want to fail over immediately to the endpoint, you should disable this option (value of false ). If you enable this option, Apache Camel will first attempt to process the message using the errorHandler . For example, the errorHandler might be configured to redeliver messages and use delays between attempts. Apache Camel will initially try to redeliver to the original endpoint, and only fail over to the endpoint when the errorHandler is exhausted. maximumFailoverAttempts int -1 Camel 2.3: Specifies the maximum number of attempts to fail over to a new endpoint. The value, 0 , implies that no failover attempts are made and the value, -1 , implies an infinite number of failover attempts. roundRobin boolean false Camel 2.3: Specifies whether the failover load balancer should operate in round robin mode or not. If not, it will always start from the first endpoint when a new message is to be processed. In other words it restarts from the top for every message. If round robin is enabled, it keeps state and continues with the endpoint in a round robin fashion. When using round robin it will not stick to last known good endpoint, it will always pick the endpoint to use. The following example is configured to fail over, only if an IOException exception is thrown: You can optionally specify multiple exceptions to fail over, as follows: You can configure the same route in XML, as follows: The following example shows how to fail over in round robin mode: You can configure the same route in XML, as follows: If you want to failover to the endpoint as soon as possible, you can disable the inheritErrorHandler by configuring inheritErrorHandler=false . By disabling the Error Handler you can ensure that it does not intervene. This allows the failover load balancer to handle failover as soon as possible. If you also enable the roundRobin mode, then it keeps retrying until it successes. You can then configure the maximumFailoverAttempts option to a high value to let it eventually exhaust and fail. Weighted round robin and weighted random In many enterprise environments, where server nodes of unequal processing power are hosting services, it is usually preferable to distribute the load in accordance with the individual server processing capacities. A weighted round robin algorithm or a weighted random algorithm can be used to address this problem. The weighted load balancing policy allows you to specify a processing load distribution ratio for each server with respect to the others. You can specify this value as a positive processing weight for each server. A larger number indicates that the server can handle a larger load. The processing weight is used to determine the payload distribution ratio of each processing endpoint with respect to the others. The parameters that can be used are described in the following table: Table 8.3. 
Weighted Options Option Type Default Description roundRobin boolean false The default value for round-robin is false . In the absence of this setting or parameter, the load-balancing algorithm used is random. distributionRatioDelimiter String , The distributionRatioDelimiter is the delimiter used to specify the distributionRatio . If this attribute is not specified, comma , is the default delimiter. The following Java DSL examples show how to define a weighted round-robin route and a weighted random route: You can configure the round-robin route in XML, as follows: Custom Load Balancer You can use a custom load balancer (eg your own implementation) also. An example using Java DSL: And the same example using XML DSL: Notice in the XML DSL above we use <custom> which is only available in Camel 2.8 onwards. In older releases you would have to do as follows instead: To implement a custom load balancer you can extend some support classes such as LoadBalancerSupport and SimpleLoadBalancerSupport . The former supports the asynchronous routing engine, and the latter does not. Here is an example: Circuit Breaker The Circuit Breaker load balancer is a stateful pattern that is used to monitor all calls for certain exceptions. Initially, the Circuit Breaker is in closed state and passes all messages. If there are failures and the threshold is reached, it moves to open state and rejects all calls until halfOpenAfter timeout is reached. After the timeout, if there is a new call, the Circuit Breaker passes all the messages. If the result is success, the Circuit Breaker moves to a closed state, if not, it moves back to open state. Java DSL example: Spring XML example: 8.11. Hystrix Overview Available as of Camel 2.18. The Hystrix pattern lets an application integrate with Netflix Hystrix, which can provide a circuit breaker in Camel routes. Hystrix is a latency and fault tolerance library designed to Isolate points of access to remote systems, services and third-party libraries Stop cascading failure Enable resilience in complex distributed systems where failure is inevitable If you use maven then add the following dependency to your pom.xml file to use Hystrix: Java DSL example Below is an example route that shows a Hystrix endpoint that protects against slow operation by falling back to the in-lined fallback route. By default, the timeout request is just 1000ms so the HTTP endpoint has to be fairly quick to succeed. XML configuration example Following is the same example but in XML: Using the Hystrix fallback feature The onFallback() method is for local processing where you can transform a message or call a bean or something else as the fallback. If you need to call an external service over the network then you should use the onFallbackViaNetwork() method, which runs in an independent HystrixCommand object that uses its own thread pool so it does not exhaust the first command object. Hystrix configuration examples Hystrix has many options as listed in the section. The example below shows the Java DSL for setting the execution timeout to 5 seconds rather than the default 1 second and for letting the circuit breaker wait 10 seconds rather than 5 seconds (the default) before attempting a request again when the state was tripped to be open. Following is the same example but in XML: Options Ths Hystrix component supports the following options. Hystrix provides the default values. 
Name Default Value Type Description circuitBreakerEnabled true Boolean Determines whether a circuit breaker will be used to track health and to short-circuit requests if it trips. circuitBreakerErrorThresholdPercentage 50 Integer Sets the error percentage at or above which the circuit should trip open and start short-circuiting requests to fallback logic. circuitBreakerForceClosed false Boolean A value of true forces the circuit breaker into a closed state in which it allows requests regardless of the error percentage. circuitBreakerForceOpen false Boolean A value of true forces the circuit breaker into an open (tripped) state in which it rejects all requests. circuitBreakerRequestVolumeThreshold 20 Integer Sets the minimum number of requests in a rolling window that will trip the circuit. circuitBreakerSleepWindownInMilliseconds 5000 Integer Sets the amount of time, after tripping the circuit, to reject requests. After this time elapses, request attempts are allowed to determine if the circuit should again be closed. commandKey Node ID String Identifies the Hystrix command. You cannot configure this option. it is always the node ID to make the command unique. corePoolSize 10 Integer Sets the core thread-pool size. This is the maximum number of HystrixCommand objects that can execute concurrently. executionIsolationSemaphoreMaxConcurrentRequests 10 Integer Sets the maximum number of requests that a HystrixCommand.run() method can make when you are using ExecutionIsolationStrategy.SEMAPHORE . executionIsolationStrategy THREAD String Indicates which of these isolation strategies HystrixCommand.run() executes with. THREAD executes on a separate thread and concurrent requests are limited by the number of threads in the thread-pool. SEMAPHORE executes on the calling thread and concurrent requests are limited by the semaphore count: executionIsolationThreadInterruptOnTimeout true Boolean Indicates whether the HystrixCommand.run() execution should be interrupted when a timeout occurs. executionTimeoutInMilliseconds 1000 Integer Sets the timeout in milliseconds for execution completion. executionTimeoutEnabled true Boolean Indicates whether the execution of HystrixCommand.run() should be timed. fallbackEnabled true Boolean Determines whether a call to HystrixCommand.getFallback() is attempted when failure or rejection occurs. fallbackIsolationSemaphoreMaxConcurrentRequests 10 Integer Sets the maximum number of requests that a HystrixCommand.getFallback() method can make from a calling thread. groupKey CamelHystrix String Identifies the Hystrix group being used to correlate statistics and circuit breaker properties. keepAliveTime 1 Integer Sets the keep-alive time, in minutes. maxQueueSize -1 Integer Sets the maximum queue size of the BlockingQueue implementation. metricsHealthSnapshotIntervalInMilliseconds 500 Integer Sets the time to wait, in milliseconds, between allowing health snapshots to be taken. Health snapshots calculate success and error percentages and affect circuit breaker status. metricsRollingPercentileBucketSize 100 Integer Sets the maximum number of execution times that are kept per bucket. If more executions occur during the time they will wrap around and start over-writing at the beginning of the bucket. metricsRollingPercentileEnabled true Boolean Indicates whether execution latency should be tracked. The latency is calculated as a percentile. A value of false causes summary statistics (mean, percentiles) to be returned as -1. 
metricsRollingPercentileWindowBuckets 6 Integer Sets the number of buckets the rollingPercentile window will be divided into. metricsRollingPercentileWindowInMilliseconds 60000 Integer Sets the duration of the rolling window in which execution times are kept to allow for percentile calculations, in milliseconds. metricsRollingStatisticalWindowBuckets 10 Integer Sets the number of buckets the rolling statistical window is divided into. metricsRollingStatisticalWindowInMilliseconds 10000 Integer This option and the following options apply to capturing metrics from HystrixCommand and HystrixObservableCommand execution. queueSizeRejectionThreshold 5 Integer Sets the queue size rejection threshold - an artificial maximum queue size at which rejections occur even ify maxQueueSize has not been reached. requestLogEnabled true Boolean Indicates whether HystrixCommand execution and events should be logged to HystrixRequestLog . threadPoolKey null String Defines which thread-pool this command should run in. By default this is using the same key as the group key. threadPoolMetricsRollingStatisticalWindowBucket 10 Integer Sets the number of buckets the rolling statistical window is divided into. threadPoolMetricsRollingStatisticalWindowInMilliseconds 10000 Integer Sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. 8.12. Service Call Overview Available as of Camel 2.18. The service call pattern lets you call remote services in a distributed system. The service to call is looked up in a service registry such as Kubernetes, Consul, etcd or Zookeeper. The pattern separates the configuration of the service registry from the calling of the service. Maven users must add a dependency for the service registry to be used. Possibilities include: camel-consul camel-etcd camel-kubenetes camel-ribbon Syntax for calling a service To call a service, refer to the name of the service as shown below: The following example shows the XML DSL for calling a service: In these examples, Camel uses the component that integrates with the service registry to look up a service with the name foo . The lookup returns a set of IP:PORT pairs that refer to a list of active servers that host the remote service. Camel then randomly selects from that list the server to use and builds a Camel URI with the chosen IP and PORT number. By default, Camel uses the HTTP component. In the example above, the call resolves to a Camel URI that is called by a dynamic toD endpoint as shown below: You can use URI parameters to call the service, for example, beer=yes : You can also provide a context path, for example: Translating service names to URIs As you can see, the service name resolves to a Camel endpoint URI. Following are a few more examples. The -> shows the resolution of the Camel URI): To fully control the resolved URI, provide an additional URI parameter that specifies the desired Camel URI. In the specified URI, you can use the service name, which resolves to IP:PORT . Here are some examples: The examples above call a service named myService . The second parameter controls the value of the resolved URI. Notice that the first example uses serviceName.host and serviceName.port to refer to either the IP or the PORT. If you specify just serviceName then it resolves to IP:PORT . Configuring the component that calls the service By default, Camel uses the HTTP component to call the service. 
You can configure the use of a different component, such as HTTP4 or Netty4 HTTP, as in the following example: Following is an example in XML DSL: Options shared by all implementations The following options are available for each implementation: Option Default Value Description clientProperty Specify properties that are specific to the service call implementation you are using. For example, if you are using a ribbon implementation, then client properties are defined in com.netflix.client.config.CommonClientConfigKey . component http Sets the default Camel component to use to call the remote service. You can configure the use of a component such as netty4-http, jetty, restlet or some other component. If the service does not use the HTTP protocol then you must use another component, such as mqtt, jms, amqp. If you specify a URI parameter in the service call then the component specified in this parameter is used instead of the default. loadBalancerRef Sets a reference to a custom org.apache.camel.spi.ServiceCallLoadBalancer to use. serverListStrategyRef Sets a reference to a custom org.apache.camel.spi.ServiceCallServerListStrategy to use. Service call options when using Kubernetes A Kubernetes implementation supports the following options: Option Default Value Description apiVersion Kubernetes API version when using client lookup. caCertData Sets the Certificate Authority data when using client lookup. caCertFile Sets the Certificate Authority data that are loaded from the file when using client lookup. clientCertData Sets the Client Certificate data when using client lookup. clientCertFile Sets the Client Certificate data that are loaded from the file when using client lookup. clientKeyAlgo Sets the Client Keystore algorithm, such as RSA, when using client lookup. clientKeyData Sets the Client Keystore data when using client lookup. clientKeyFile Sets the Client Keystore data that are loaded from the file when using client lookup. clientKeyPassphrase Sets the Client Keystore passphrase when using client lookup. dnsDomain Sets the DNS domain to use for dns lookup. lookup environment The choice of strategy used to look up the service. The lookup strategies include: environment - Use environment variables. dns - Use DNS domain names. client - Use a Java client to call Kubernetes master API and query which servers are actively hosting the services. masterUrl The URL for the Kubernetes master when using client lookup. namespace The Kubernetes namespace to use. By default the namespace's name is taken from the environment variable KUBERNETES_MASTER . oauthToken Sets the OAUTH token for authentication (instead of username/password) when using client lookup. password Sets the password for authentication when using client lookup. trustCerts false Sets whether to turn on trust certificate check when using client lookup. username Sets the username for authentication when using client lookup. 8.13. Multicast Overview The multicast pattern, shown in Figure 8.9, "Multicast Pattern" , is a variation of the recipient list with a fixed destination pattern, which is compatible with the InOut message exchange pattern. This is in contrast to recipient list, which is only compatible with the InOnly exchange pattern. Figure 8.9. Multicast Pattern Multicast with a custom aggregation strategy Whereas the multicast processor receives multiple Out messages in response to the original request (one from each of the recipients), the original caller is only expecting to receive a single reply. 
Thus, there is an inherent mismatch on the reply leg of the message exchange, and to overcome this mismatch, you must provide a custom aggregation strategy to the multicast processor. The aggregation strategy class is responsible for aggregating all of the Out messages into a single reply message. Consider the example of an electronic auction service, where a seller offers an item for sale to a list of buyers. The buyers each put in a bid for the item, and the seller automatically selects the bid with the highest price. You can implement the logic for distributing an offer to a fixed list of buyers using the multicast() DSL command, as follows: Where the seller is represented by the endpoint, cxf:bean:offer , and the buyers are represented by the endpoints, cxf:bean:Buyer1 , cxf:bean:Buyer2 , cxf:bean:Buyer3 . To consolidate the bids received from the various buyers, the multicast processor uses the aggregation strategy, HighestBidAggregationStrategy . You can implement the HighestBidAggregationStrategy in Java, as follows: Where it is assumed that the buyers insert the bid price into a header named, Bid . For more details about custom aggregation strategies, see Section 8.5, "Aggregator" . Parallel processing By default, the multicast processor invokes each of the recipient endpoints one after another (in the order listed in the to() command). In some cases, this might cause unacceptably long latency. To avoid these long latency times, you have the option of enabling parallel processing by adding the parallelProcessing() clause. For example, to enable parallel processing in the electronic auction example, define the route as follows: Where the multicast processor now invokes the buyer endpoints, using a thread pool that has one thread for each of the endpoints. If you want to customize the size of the thread pool that invokes the buyer endpoints, you can invoke the executorService() method to specify your own custom executor service. For example: Where MyExecutor is an instance of java.util.concurrent.ExecutorService type. When the exchange has an InOut pattern, an aggregation strategy is used to aggregate reply messages. The default aggregation strategy takes the latest reply message and discards earlier replies. For example, in the following route, the custom strategy, MyAggregationStrategy , is used to aggregate the replies from the endpoints, direct:a , direct:b , and direct:c : XML configuration example The following example shows how to configure a similar route in XML, where the route uses a custom aggregation strategy and a custom thread executor: Where both the parallelProcessing attribute and the threadPoolRef attribute are optional. It is only necessary to set them if you want to customize the threading behavior of the multicast processor. Apply custom processing to the outgoing messages The multicast pattern copies the source Exchange and multicasts the copy. By default, the router makes a shallow copy of the source message. In a shallow copy, the headers and payload of the original message are copied by reference only, so that resulting copies of the original message are linked. Because shallow copies of a multicast message are linked, you're unable to apply custom processing if the message body is mutable. Custom processing that you apply to a copy sent to one endpoint are also applied to copies sent to every other endpoint. 
Note Although the multicast syntax allows you to invoke the process DSL command in the multicast clause, this does not make sense semantically and it does not have the same effect as onPrepare (in fact, in this context, the process DSL command has no effect). Using onPrepare to execute custom logic when preparing messages If you want to apply custom processing to each message replica before it is sent to its endpoint, you can invoke the onPrepare DSL command in the multicast clause. The onPrepare command inserts a custom processor just after the message has been shallow-copied and just before the message is dispatched to its endpoint. For example, in the following route, the CustomProc processor is invoked on the message sent to direct:a and the CustomProc processor is also invoked on the message sent to direct:b . A common use case for the onPrepare DSL command is to perform a deep copy of some or all elements of a message. For example, the following CustomProc processor class performs a deep copy of the message body, where the message body is presumed to be of type, BodyType , and the deep copy is performed by the method, BodyType .deepCopy() . You can use onPrepare to implement any kind of custom logic that you want to execute before the Exchange is multicast. Note It is recommended practice to design for immutable objects. For example if you have a mutable message body as this Animal class: Then we can create a deep clone processor which clones the message body: Then we can use the AnimalDeepClonePrepare class in the multicast route using the onPrepare option as shown: And the same example in XML DSL Options The multicast DSL command supports the following options: Name Default Value Description strategyRef Refers to an AggregationStrategy to be used to assemble the replies from the multicasts, into a single outgoing message from the multicast . By default Camel will use the last reply as the outgoing message. strategyMethodName This option can be used to explicitly specify the method name to use, when using POJOs as the AggregationStrategy . strategyMethodAllowNull false This option can be used, when using POJOs as the AggregationStrategy . If false , the aggregate method is not used, when there is no data to enrich. If true , null values are used for the oldExchange , when there is no data to enrich. parallelProcessing false If enabled, sending messages to the multicasts occurs concurrently. Note the caller thread will still wait until all messages has been fully processed, before it continues. Its only the sending and processing the replies from the multicasts which happens concurrently. parallelAggregate false If enabled, the aggregate method on AggregationStrategy can be called concurrently. Note that this requires the implementation of AggregationStrategy to be thread-safe. By default, this option is false , which means that Camel automatically synchronizes calls to the aggregate method. In some use-cases, however, you can improve performance by implementing AggregationStrategy as thread-safe and setting this option to true . executorServiceRef Refers to a custom Thread Pool to be used for parallel processing. Notice if you set this option, then parallel processing is automatic implied, and you do not have to enable that option as well. stopOnException false Camel 2.2: Whether or not to stop continue processing immediately when an exception occurred. If disable, then Camel will send the message to all multicasts regardless if one of them failed. 
You can deal with exceptions in the AggregationStrategy class where you have full control how to handle that. streaming false If enabled then Camel will process replies out-of-order, eg in the order they come back. If disabled, Camel will process replies in the same order as multicasted. timeout Camel 2.5: Sets a total timeout specified in milliseconds. If the multicast hasn't been able to send and process all replies within the given timeframe, then the timeout triggers and the multicast breaks out and continues. Notice if you provide a TimeoutAwareAggregationStrategy then the timeout method is invoked before breaking out. onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the copy of the Exchange each multicast will receive. This allows you to do any custom logic, such as deep-cloning the message payload if that's needed etc. shareUnitOfWork false Camel 2.8: Whether the unit of work should be shared. See the same option on Section 8.4, "Splitter" for more details. 8.14. Composed Message Processor Composed Message Processor The composed message processor pattern, as shown in Figure 8.10, "Composed Message Processor Pattern" , allows you to process a composite message by splitting it up, routing the sub-messages to appropriate destinations, and then re-aggregating the responses back into a single message. Figure 8.10. Composed Message Processor Pattern Java DSL example The following example checks that a multipart order can be filled, where each part of the order requires a check to be made at a different inventory: XML DSL example The preceding route can also be written in XML DSL, as follows: Processing steps Processing starts by splitting the order, using a Section 8.4, "Splitter" . The Section 8.4, "Splitter" then sends individual OrderItems to a Section 8.1, "Content-Based Router" , which routes messages based on the item type. Widget items get sent for checking in the widgetInventory bean and gadget items get sent to the gadgetInventory bean. Once these OrderItems have been validated by the appropriate bean, they are sent on to the Section 8.5, "Aggregator" which collects and re-assembles the validated OrderItems into an order again. Each received order has a header containing an order ID . We make use of the order ID during the aggregation step: the .header("orderId") qualifier on the aggregate() DSL command instructs the aggregator to use the header with the key, orderId , as the correlation expression. For full details, check the ComposedMessageProcessorTest.java example source at camel-core/src/test/java/org/apache/camel/processor . 8.15. Scatter-Gather Scatter-Gather The scatter-gather pattern , as shown in Figure 8.11, "Scatter-Gather Pattern" , enables you to route messages to a number of dynamically specified recipients and re-aggregate the responses back into a single message. Figure 8.11. Scatter-Gather Pattern Dynamic scatter-gather example The following example outlines an application that gets the best quote for beer from several different vendors. The examples uses a dynamic Section 8.3, "Recipient List" to request a quote from all vendors and an Section 8.5, "Aggregator" to pick the best quote out of all the responses. The routes for this application are defined as follows: In the first route, the Section 8.3, "Recipient List" looks at the listOfVendors header to obtain the list of recipients. Hence, the client that sends messages to this application needs to add a listOfVendors header to the message. 
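A sketch of how such a client might set the headers using a ProducerTemplate (the endpoint URI, the vendor endpoints, and the correlation value are illustrative; the quoteRequestId header is used later by the aggregator):

ProducerTemplate template = camelContext.createProducerTemplate();
Map<String, Object> headers = new HashMap<>();
// dynamic recipient list read by the first route's recipientList()
headers.put("listOfVendors", "bean:vendor1, bean:vendor2, bean:vendor3");
// correlation ID used by the aggregator to group the returned quotes
headers.put("quoteRequestId", "quoteRequest-1");
template.sendBodyAndHeaders("direct:start", "<quote_request item=\"beer\"/>", headers);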
Example 8.1, "Messaging Client Sample" shows some sample code from a messaging client that adds the relevant header data to outgoing messages. Example 8.1. Messaging Client Sample The message would be distributed to the following endpoints: bean:vendor1 , bean:vendor2 , and bean:vendor3 . These beans are all implemented by the following class: The bean instances, vendor1 , vendor2 , and vendor3 , are instantiated using Spring XML syntax, as follows: Each bean is initialized with a different price for beer (passed as the constructor argument). When a message is sent to each bean endpoint, it arrives at the MyVendor.getQuote method. This method does a simple check to see whether this quote request is for beer and then sets the price of beer on the exchange for retrieval at a later step. The message is forwarded to the next step using POJO Producing (see the @Produce annotation). At the next step, we want to take the beer quotes from all vendors and find out which one was the best (that is, the lowest). For this, we use a Section 8.5, "Aggregator" with a custom aggregation strategy. The Section 8.5, "Aggregator" needs to identify which messages are relevant to the current quote, which is done by correlating messages based on the value of the quoteRequestId header (passed to the correlationExpression ). As shown in Example 8.1, "Messaging Client Sample" , the correlation ID is set to quoteRequest-1 (the correlation ID should be unique). To pick the lowest quote out of the set, you can use a custom aggregation strategy like the following: Static scatter-gather example You can specify the recipients explicitly in the scatter-gather application by employing a static Section 8.3, "Recipient List" . The following example shows the routes you would use to implement a static scatter-gather scenario: 8.16. Loop Loop The loop pattern enables you to process a message multiple times. It is used mainly for testing. By default, the loop uses the same exchange throughout the looping. The result from the previous iteration is used for the next iteration (see Section 5.4, "Pipes and Filters" ). From Camel 2.8 onwards you can enable copy mode instead. See the options table for details. Exchange properties On each loop iteration, two exchange properties are set, which can optionally be read by any processors included in the loop. Property Description CamelLoopSize Apache Camel 2.0: Total number of loops CamelLoopIndex Apache Camel 2.0: Index of the current iteration (0 based) Java DSL examples The following examples show how to take a request from the direct:x endpoint and then send the message repeatedly to mock:result . The number of loop iterations is specified either as an argument to loop() or by evaluating an expression at run time, where the expression must evaluate to an int (or else a RuntimeCamelException is thrown). The following example passes the loop count as a constant: The following example evaluates a simple expression to determine the loop count: The following example evaluates an XPath expression to determine the loop count: XML configuration example You can configure the same routes in Spring XML. The following example passes the loop count as a constant: The following example evaluates a simple expression to determine the loop count: Using copy mode Now suppose we send a message to the direct:start endpoint containing the letter A. With copy mode enabled, the output of processing this route is that each mock:loop endpoint receives AB as the message. However, if we do not enable copy mode, mock:loop receives the messages AB, ABB, and ABBB.
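The Java DSL versions of these routes appear among this section's code listings. As a quick, minimal sketch of the contrast (assuming the routes sit inside a RouteBuilder configure() method; the endpoint names direct:copy and direct:default are illustrative only, the original uses direct:start for both):

// With copy mode: every iteration starts from a fresh copy of the original "A",
// so each mock:loop endpoint receives "AB".
from("direct:copy")
    .loop(3).copy()
        .transform(body().append("B"))
        .to("mock:loop")
    .end()
    .to("mock:result");

// Without copy mode: the same Exchange is reused across iterations,
// so mock:loop receives "AB", then "ABB", then "ABBB".
from("direct:default")
    .loop(3)
        .transform(body().append("B"))
        .to("mock:loop")
    .end()
    .to("mock:result");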
The equivalent example in XML DSL in copy mode is as follows: Options The loop DSL command supports the following options: Name Default Value Description copy false Camel 2.8: Whether or not copy mode is used. If false , the same Exchange is used throughout the looping, so the result from the previous iteration is visible to the next iteration. Instead, you can enable copy mode, and then each iteration restarts with a fresh copy of the input Exchange (see the section called "Exchanges" ). Do While Loop You can use a do while loop to repeat processing for as long as a condition holds. The condition evaluates to either true or false. In the Java DSL, the command is loopDoWhile . The following example performs the loop while the message body length is 5 characters or less: In XML, the command is loop doWhile . The following example also performs the loop while the message body length is 5 characters or less: 8.17. Sampling Sampling Throttler A sampling throttler allows you to extract a sample of exchanges from the traffic through a route. It is configured with a sampling period during which only a single exchange is allowed to pass through. All other exchanges will be stopped. By default, the sample period is 1 second. Java DSL example Use the sample() DSL command to invoke the sampler as follows: Spring XML example In Spring XML, use the sample element to invoke the sampler, where you have the option of specifying the sampling period using the samplePeriod and units attributes: Options The sample DSL command supports the following options: Name Default Value Description messageFrequency Samples the message every Nth message. You can only use either frequency or period. samplePeriod 1 Samples the message every Nth period. You can only use either frequency or period. units SECOND Time unit as an enum of java.util.concurrent.TimeUnit from the JDK. 8.18. Dynamic Router Dynamic Router The Dynamic Router pattern, as shown in Figure 8.12, "Dynamic Router Pattern" , enables you to route a message consecutively through a series of processing steps, where the sequence of steps is not known at design time. The list of endpoints through which the message should pass is calculated dynamically at run time . Each time the message returns from an endpoint, the dynamic router calls back on a bean to discover the next endpoint in the route. Figure 8.12. Dynamic Router Pattern In Camel 2.5 we introduced a dynamicRouter in the DSL, which is like a dynamic Section 8.7, "Routing Slip" that evaluates the slip on-the-fly . Beware You must ensure that the expression used for the dynamicRouter (such as a bean) returns null to indicate the end. Otherwise, the dynamicRouter continues in an endless loop. Dynamic Router in Camel 2.5 onwards From Camel 2.5, the Section 8.18, "Dynamic Router" updates the exchange property, Exchange.SLIP_ENDPOINT , with the current endpoint as it advances through the slip. This enables you to find out how far the exchange has progressed through the slip. (It is a slip because the Section 8.18, "Dynamic Router" implementation is based on Section 8.7, "Routing Slip" .) Java DSL In the Java DSL you can use the dynamicRouter as follows: This leverages bean integration to compute the slip on-the-fly, and could be implemented as follows: Note The preceding example is not thread safe. You would have to store the state on the Exchange to ensure thread safety.
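Before moving on to the Spring XML version of the original example, note that one way to address this thread-safety concern is to keep the iteration count on the Exchange itself, so the bean holds no mutable state. The following is a minimal, illustrative sketch only and is not part of the example source; the property name invoked and the endpoint URIs are assumptions.

import org.apache.camel.Exchange;

public class MySlipBean {

    /**
     * Compute the next endpoint dynamically, keeping all state on the Exchange
     * so that the bean itself stays stateless and therefore thread safe.
     */
    public String slip(String body, Exchange exchange) {
        // read and increment a per-exchange invocation counter
        int invoked = exchange.getProperty("invoked", 0, Integer.class) + 1;
        exchange.setProperty("invoked", invoked);

        if (invoked == 1) {
            return "mock:a";
        } else if (invoked == 2) {
            return "mock:b,mock:c";
        } else if (invoked == 3) {
            return "mock:result";
        }
        // return null to signal the end of the slip
        return null;
    }
}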
Spring XML The same example in Spring XML would be: Options The dynamicRouter DSL command supports the following options: Name Default Value Description uriDelimiter , Delimiter used if the expression (see Part II, "Routing Expression and Predicate Languages" ) returns multiple endpoints. ignoreInvalidEndpoints false Whether to ignore an endpoint URI that cannot be resolved. If not ignored, Camel throws an exception stating that the endpoint URI is not valid. @DynamicRouter annotation You can also use the @DynamicRouter annotation. For example: The route method is invoked repeatedly as the message progresses through the slip. The idea is to return the endpoint URI of the next destination. Return null to indicate the end. You can return multiple endpoints if you like, just as with the Section 8.7, "Routing Slip" , where each endpoint is separated by a delimiter.
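As a sketch of that last point, a @DynamicRouter method can return a comma-separated list of endpoints on one invocation and null once it is done. The following is illustrative only and is not part of the example source: the endpoint URIs are assumptions, and it relies on the Exchange.SLIP_ENDPOINT property described above being unset until the message has visited at least one endpoint.

import org.apache.camel.Consume;
import org.apache.camel.DynamicRouter;
import org.apache.camel.Exchange;

public class MyMultiEndpointRouter {

    @Consume(uri = "direct:start")
    @DynamicRouter
    public String route(String body, Exchange exchange) {
        // Exchange.SLIP_ENDPOINT is set once the message has visited at least one endpoint
        if (exchange.getProperty(Exchange.SLIP_ENDPOINT) == null) {
            // first invocation: return two endpoints, separated by the default delimiter (a comma)
            return "mock:audit,mock:billing";
        }
        // both endpoints have been visited, so end the slip
        return null;
    }
}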
[ "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"seda:a\").choice() .when(header(\"foo\").isEqualTo(\"bar\")).to(\"seda:b\") .when(header(\"foo\").isEqualTo(\"cheese\")).to(\"seda:c\") .otherwise().to(\"seda:d\"); } };", "<camelContext id=\"buildSimpleRouteWithChoice\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <choice> <when> <xpath>USDfoo = 'bar'</xpath> <to uri=\"seda:b\"/> </when> <when> <xpath>USDfoo = 'cheese'</xpath> <to uri=\"seda:c\"/> </when> <otherwise> <to uri=\"seda:d\"/> </otherwise> </choice> </route> </camelContext>", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"seda:a\").filter(header(\"foo\").isEqualTo(\"bar\")).to(\"seda:b\"); } };", "from(\"direct:start\"). filter().xpath(\"/person[@name='James']\"). to(\"mock:result\");", "<camelContext id=\"simpleFilterRoute\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <filter> <xpath>USDfoo = 'bar'</xpath> <to uri=\"seda:b\"/> </filter> </route> </camelContext>", "from(\"direct:start\") .filter().method(MyBean.class, \"isGoldCustomer\").to(\"mock:result\").end() .to(\"mock:end\"); public static class MyBean { public boolean isGoldCustomer(@Header(\"level\") String level) { return level.equals(\"gold\"); } }", "from(\"direct:start\") .choice() .when(bodyAs(String.class).contains(\"Hello\")).to(\"mock:hello\") .when(bodyAs(String.class).contains(\"Bye\")).to(\"mock:bye\"). stop() .otherwise().to(\"mock:other\") .end() .to(\"mock:result\");", "from(\"seda:a\").to(\"seda:b\", \"seda:c\", \"seda:d\");", "<camelContext id=\"buildStaticRecipientList\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <to uri=\"seda:b\"/> <to uri=\"seda:c\"/> <to uri=\"seda:d\"/> </route> </camelContext>", "from(\"direct:a\").recipientList(header(\"recipientListHeader\").tokenize(\",\"));", "from(\"seda:a\").recipientList(header(\"recipientListHeader\"));", "<camelContext id=\"buildDynamicRecipientList\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <recipientList delimiter=\",\"> <header>recipientListHeader</header> </recipientList> </route> </camelContext>", "from(\"direct:a\").recipientList(header(\"myHeader\")).parallelProcessing();", "<route> <from uri=\"direct:a\"/> <recipientList parallelProcessing=\"true\"> <header>myHeader</header> </recipientList> </route>", "from(\"direct:a\").recipientList(header(\"myHeader\")).stopOnException();", "<route> <from uri=\"direct:a\"/> <recipientList stopOnException=\"true\"> <header>myHeader</header> </recipientList> </route>", "from(\"direct:a\").recipientList(header(\"myHeader\")).ignoreInvalidEndpoints();", "<route> <from uri=\"direct:a\"/> <recipientList ignoreInvalidEndpoints=\"true\"> <header>myHeader</header> </recipientList> </route>", "from(\"direct:a\") .recipientList(header(\"myHeader\")).aggregationStrategy(new MyOwnAggregationStrategy()) .to(\"direct:b\");", "<route> <from uri=\"direct:a\"/> <recipientList strategyRef=\"myStrategy\"> <header>myHeader</header> </recipientList> <to uri=\"direct:b\"/> </route> <bean id=\"myStrategy\" class=\"com.mycompany.MyOwnAggregationStrategy\"/>", "from(\"activemq:queue:test\").recipientList().method(MessageRouter.class, \"routeTo\");", "public class MessageRouter { public String routeTo() { String queueName = \"activemq:queue:test2\"; return queueName; } }", "public class MessageRouter { @RecipientList public String routeTo() { String queueList = 
\"activemq:queue:test1,activemq:queue:test2\"; return queueList; } }", "from(\"activemq:queue:test\").bean(MessageRouter.class, \"routeTo\");", "from(\"direct:start\") .recipientList(header(\"recipients\"), \",\") .aggregationStrategy(new AggregationStrategy() { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { if (oldExchange == null) { return newExchange; } String body = oldExchange.getIn().getBody(String.class); oldExchange.getIn().setBody(body + newExchange.getIn().getBody(String.class)); return oldExchange; } }) .parallelProcessing().timeout(250) // use end to indicate end of recipientList clause .end() .to(\"mock:result\"); from(\"direct:a\").delay(500).to(\"mock:A\").setBody(constant(\"A\")); from(\"direct:b\").to(\"mock:B\").setBody(constant(\"B\")); from(\"direct:c\").to(\"mock:C\").setBody(constant(\"C\"));", "// Java public interface TimeoutAwareAggregationStrategy extends AggregationStrategy { /** * A timeout occurred * * @param oldExchange the oldest exchange (is <tt>null</tt> on first aggregation as we only have the new exchange) * @param index the index * @param total the total * @param timeout the timeout value in millis */ void timeout(Exchange oldExchange, int index, int total, long timeout);", "from(\"direct:start\") .recipientList().onPrepare(new CustomProc());", "// Java import org.apache.camel.*; public class CustomProc implements Processor { public void process(Exchange exchange) throws Exception { BodyType body = exchange.getIn().getBody( BodyType .class); // Make a _deep_ copy of of the body object BodyType clone = BodyType .deepCopy(); exchange.getIn().setBody(clone); // Headers and attachments have already been // shallow-copied. If you need deep copies, // add some more code here. } }", "from(\"file:inbox\") // the exchange pattern is InOnly initially when using a file route .recipientList().constant(\"activemq:queue:inbox?exchangePattern=InOut\") .to(\"file:outbox\");", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"seda:a\") .split(bodyAs(String.class).tokenize(\"\\n\")) .to(\"seda:b\"); } };", "from(\"activemq:my.queue\") .split(xpath(\"//foo/bar\")) .to(\"file://some/directory\")", "<camelContext id=\"buildSplitter\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <split> <xpath>//foo/bar</xpath> <to uri=\"seda:b\"/> </split> </route> </camelContext>", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <split> <tokenize token=\"\\n\"/> <to uri=\"mock:result\"/> </split> </route> </camelContext>", "from(\"file:inbox\") .split().tokenize(\"\\n\", 1000).streaming() .to(\"activemq:queue:order\");", "<route> <from uri=\"file:inbox\"/> <split streaming=\"true\"> <tokenize token=\"\\n\" group=\"1000\"/> <to uri=\"activemq:queue:order\"/> </split> </route>", "from(\"direct:start\") // split by new line and group by 3, and skip the very first element .split().tokenize(\"\\n\", 3, true).streaming() .to(\"mock:group\");", "<route> <from uri=\"file:inbox\"/> <split streaming=\"true\"> <tokenize token=\"\\n\" group=\"1000\" skipFirst=\"true\" /> <to uri=\"activemq:queue:order\"/> </split> </route>", "XPathBuilder xPathBuilder = new XPathBuilder(\"//foo/bar\"); from(\"activemq:my.queue\").split(xPathBuilder).parallelProcessing().to(\"activemq:my.parts\");", "XPathBuilder xPathBuilder = new XPathBuilder(\"//foo/bar\"); ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(8, 16, 0L, TimeUnit.MILLISECONDS, new 
LinkedBlockingQueue()); from(\"activemq:my.queue\") .split(xPathBuilder) .parallelProcessing() .executorService(threadPoolExecutor) .to(\"activemq:my.parts\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:parallel-custom-pool\"/> <split executorServiceRef=\"threadPoolExecutor\"> <xpath>/invoice/lineItems</xpath> <to uri=\"mock:result\"/> </split> </route> </camelContext> <bean id=\"threadPoolExecutor\" class=\"java.util.concurrent.ThreadPoolExecutor\"> <constructor-arg index=\"0\" value=\"8\"/> <constructor-arg index=\"1\" value=\"16\"/> <constructor-arg index=\"2\" value=\"0\"/> <constructor-arg index=\"3\" value=\"MILLISECONDS\"/> <constructor-arg index=\"4\"><bean class=\"java.util.concurrent.LinkedBlockingQueue\"/></constructor-arg> </bean>", "from(\"direct:body\") // here we use a POJO bean mySplitterBean to do the split of the payload .split() .method(\"mySplitterBean\", \"splitBody\") .to(\"mock:result\"); from(\"direct:message\") // here we use a POJO bean mySplitterBean to do the split of the message // with a certain header value .split() .method(\"mySplitterBean\", \"splitMessage\") .to(\"mock:result\");", "public class MySplitterBean { /** * The split body method returns something that is iteratable such as a java.util.List. * * @param body the payload of the incoming message * @return a list containing each part split */ public List<String> splitBody(String body) { // since this is based on an unit test you can of couse // use different logic for splitting as {router} have out // of the box support for splitting a String based on comma // but this is for show and tell, since this is java code // you have the full power how you like to split your messages List<String> answer = new ArrayList<String>(); String[] parts = body.split(\",\"); for (String part : parts) { answer.add(part); } return answer; } /** * The split message method returns something that is iteratable such as a java.util.List. * * @param header the header of the incoming message with the name user * @param body the payload of the incoming message * @return a list containing each part split */ public List<Message> splitMessage(@Header(value = \"user\") String header, @Body String body) { // we can leverage the Parameter Binding Annotations // http://camel.apache.org/parameter-binding-annotations.html // to access the message header and body at same time, // then create the message that we want, splitter will // take care rest of them. 
// *NOTE* this feature requires {router} version >= 1.6.1 List<Message> answer = new ArrayList<Message>(); String[] parts = header.split(\",\"); for (String part : parts) { DefaultMessage message = new DefaultMessage(); message.setHeader(\"user\", part); message.setBody(body); answer.add(message); } return answer; } }", "BeanIOSplitter splitter = new BeanIOSplitter(); splitter.setMapping(\"org/apache/camel/dataformat/beanio/mappings.xml\"); splitter.setStreamName(\"employeeFile\"); // Following is a route that uses the beanio data format to format CSV data // in Java objects: from(\"direct:unmarshal\") // Here the message body is split to obtain a message for each row: .split(splitter).streaming() .to(\"log:line\") .to(\"mock:beanio-unmarshal\");", "BeanIOSplitter splitter = new BeanIOSplitter(); splitter.setMapping(\"org/apache/camel/dataformat/beanio/mappings.xml\"); splitter.setStreamName(\"employeeFile\"); splitter.setBeanReaderErrorHandlerType(MyErrorHandler.class); from(\"direct:unmarshal\") .split(splitter).streaming() .to(\"log:line\") .to(\"mock:beanio-unmarshal\");", "from(\"direct:start\") .split(body().tokenize(\"@\"), new MyOrderStrategy()) // each split message is then send to this bean where we can process it .to(\"bean:MyOrderService?method=handleOrder\") // this is important to end the splitter route as we do not want to do more routing // on each split message .end() // after we have split and handled each message we want to send a single combined // response back to the original caller, so we let this bean build it for us // this bean will receive the result of the aggregate strategy: MyOrderStrategy .to(\"bean:MyOrderService?method=buildCombinedResponse\")", "/** * This is our own order aggregation strategy where we can control * how each split message should be combined. 
As we do not want to * lose any message, we copy from the new to the old to preserve the * order lines as long we process them */ public static class MyOrderStrategy implements AggregationStrategy { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { // put order together in old exchange by adding the order from new exchange if (oldExchange == null) { // the first time we aggregate we only have the new exchange, // so we just return it return newExchange; } String orders = oldExchange.getIn().getBody(String.class); String newLine = newExchange.getIn().getBody(String.class); LOG.debug(\"Aggregate old orders: \" + orders); LOG.debug(\"Aggregate new order: \" + newLine); // put orders together separating by semi colon orders = orders + \";\" + newLine; // put combined order back on old to preserve it oldExchange.getIn().setBody(orders); // return old as this is the one that has all the orders gathered until now return oldExchange; } }", "from(\"direct:streaming\") .split(body().tokenize(\",\"), new MyOrderStrategy()) .parallelProcessing() .streaming() .to(\"activemq:my.parts\") .end() .to(\"activemq:all.parts\");", "// Java import static org.apache.camel.builder.ExpressionBuilder.beanExpression; from(\"direct:streaming\") .split(beanExpression(new MyCustomIteratorFactory(), \"iterator\")) .streaming().to(\"activemq:my.parts\")", "from(\"file:inbox\") .split().tokenizeXML(\"order\").streaming() .to(\"activemq:queue:order\");", "<route> <from uri=\"file:inbox\"/> <split streaming=\"true\"> <tokenize token=\"order\" xml=\"true\"/> <to uri=\"activemq:queue:order\"/> </split> </route>", "from(\"file:inbox\") .split().tokenizeXML(\"order\", \"orders\" ).streaming() .to(\"activemq:queue:order\");", "<route> <from uri=\"file:inbox\"/> <split streaming=\"true\"> <tokenize token=\"order\" xml=\"true\" inheritNamespaceTagName=\"orders\" /> <to uri=\"activemq:queue:order\"/> </split> </route>", "from(\"direct:start\") .aggregate(header(\"id\"), new UseLatestAggregationStrategy()) .completionTimeout(3000) .to(\"mock:aggregated\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <aggregate strategyRef=\"aggregatorStrategy\" completionTimeout=\"3000\"> <correlationExpression> <simple>header.StockSymbol</simple> </correlationExpression> <to uri=\"mock:aggregated\"/> </aggregate> </route> </camelContext> <bean id=\"aggregatorStrategy\" class=\"org.apache.camel.processor.aggregate.UseLatestAggregationStrategy\"/>", "from(\"direct:start\") .aggregate(xpath(\"/stockQuote/@symbol\"), new UseLatestAggregationStrategy()) .completionTimeout(3000) .to(\"mock:aggregated\");", "from(...).aggregate(...).ignoreInvalidCorrelationKeys()", "<aggregate strategyRef=\"aggregatorStrategy\" ignoreInvalidCorrelationKeys=\"true\" ...> </aggregate>", "from(\"direct:start\") .aggregate(header(\"id\")) .aggregationStrategy(new UseLatestAggregationStrategy()) .completionTimeout(3000) .to(\"mock:aggregated\");", "void timeout(Exchange oldExchange, int index, int total, long timeout)", "void onCompletion(Exchange exchange)", "//simply combines Exchange String body values using '' as a delimiter class StringAggregationStrategy implements AggregationStrategy { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { if (oldExchange == null) { return newExchange; } String oldBody = oldExchange.getIn().getBody(String.class); String newBody = newExchange.getIn().getBody(String.class); oldExchange.getIn().setBody(oldBody + \"\" + newBody); return 
oldExchange; } } //simply combines Exchange body values into an ArrayList<Object> class ArrayListAggregationStrategy implements AggregationStrategy { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { Object newBody = newExchange.getIn().getBody(); ArrayList<Object> list = null; if (oldExchange == null) { list = new ArrayList<Object>(); list.add(newBody); newExchange.getIn().setBody(list); return newExchange; } else { list = oldExchange.getIn().getBody(ArrayList.class); list.add(newBody); return oldExchange; } } }", "from(\"direct:start\") .aggregate(header(\"StockSymbol\"), new ArrayListAggregationStrategy()) .completionTimeout(3000) .to(\"mock:result\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <aggregate strategyRef=\"aggregatorStrategy\" completionTimeout=\"3000\"> <correlationExpression> <simple>header.StockSymbol</simple> </correlationExpression> <to uri=\"mock:aggregated\"/> </aggregate> </route> </camelContext> <bean id=\"aggregatorStrategy\" class=\"com.my_package_name.ArrayListAggregationStrategy\"/>", "// Java import org.apache.camel.processor.aggregate.AggregationStrategy; import org.apache.camel.Service; import java.lang.Exception; class MyAggStrategyWithLifecycleControl implements AggregationStrategy, Service { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { // Implementation not shown } public void start() throws Exception { // Actions to perform when the enclosing EIP starts up } public void stop() throws Exception { // Actions to perform when the enclosing EIP is stopping } }", "from(\"direct:start\") .aggregate( header(\"id\"), new UseLatestAggregationStrategy() ) .completionPredicate( header(\"MsgType\").isEqualTo(\"ALERT\") ) .eagerCheckCompletion() .to(\"mock:result\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <aggregate strategyRef=\"aggregatorStrategy\" eagerCheckCompletion=\"true\"> <correlationExpression> <simple>header.StockSymbol</simple> </correlationExpression> <completionPredicate> <simple>USDMsgType = 'ALERT'</simple> </completionPredicate> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext> <bean id=\"aggregatorStrategy\" class=\"org.apache.camel.processor.aggregate.UseLatestAggregationStrategy\"/>", "from(\"direct:start\") .aggregate(header(\"StockSymbol\"), new UseLatestAggregationStrategy()) .completionTimeout(header(\"timeout\")) .to(\"mock:aggregated\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <aggregate strategyRef=\"aggregatorStrategy\"> <correlationExpression> <simple>header.StockSymbol</simple> </correlationExpression> <completionTimeout> <header>timeout</header> </completionTimeout> <to uri=\"mock:aggregated\"/> </aggregate> </route> </camelContext> <bean id=\"aggregatorStrategy\" class=\"org.apache.camel.processor.UseLatestAggregationStrategy\"/>", "from(\"direct:start\") .aggregate(header(\"StockSymbol\"), new UseLatestAggregationStrategy()) .completionSize(header(\"mySize\")) .to(\"mock:aggregated\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <aggregate strategyRef=\"aggregatorStrategy\"> <correlationExpression> <simple>header.StockSymbol</simple> </correlationExpression> <completionSize> <header>mySize</header> </completionSize> <to uri=\"mock:aggregated\"/> </aggregate> </route> </camelContext> <bean id=\"aggregatorStrategy\" 
class=\"org.apache.camel.processor.UseLatestAggregationStrategy\"/>", "// Java public final class MyCompletionStrategy implements AggregationStrategy { @Override public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { if (oldExchange == null) { return newExchange; } String body = oldExchange.getIn().getBody(String.class) + \"+\" + newExchange.getIn().getBody(String.class); oldExchange.getIn().setBody(body); if (body.length() >= 5) { oldExchange.setProperty(Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP, true); } return oldExchange; } }", "private AggregateController controller = new DefaultAggregateController(); from(\"direct:start\") .aggregate(header(\"id\"), new MyAggregationStrategy()).completionSize(10).id(\"myAggregator\") .aggregateController(controller) .to(\"mock:aggregated\");", "int groups = controller.forceCompletionOfGroup(\"foo\");", "int groups = controller.forceCompletionOfAllGroups();", "from(\"direct:start\") .aggregate(header(\"UniqueBatchID\"), new MyConcatenateStrategy()) .completionSize(header(\"mySize\")) .closeCorrelationKeyOnCompletion(10000) .to(\"mock:aggregated\");", "public void testTokenizeXMLPairSimple() throws Exception { Expression exp = TokenizeLanguage.tokenizeXML(\"USD{header.foo}\", null);", "exchange.getIn().setHeader(\"foo\", \"<person>\"); exchange.getIn().setBody(\"<persons><person>James</person><person>Claus</person><person>Jonathan</person><person>Hadrian</person></persons>\");", "List<?> names = exp.evaluate(exchange, List.class); assertEquals(4, names.size()); assertEquals(\"<person>James</person>\", names.get(0)); assertEquals(\"<person>Claus</person>\", names.get(1)); assertEquals(\"<person>Jonathan</person>\", names.get(2)); assertEquals(\"<person>Hadrian</person>\", names.get(3)); }", "from(\"direct:start\") .aggregate(header(\"StockSymbol\")) .completionTimeout(3000) .groupExchanges() .to(\"mock:result\");", "// Java List<Exchange> grouped = ex.getIn().getBody(List.class);", "from(\"file://inbox\") .aggregate(xpath(\"//order/@customerId\"), new AggregateCustomerOrderStrategy()) .completionFromBatchConsumer() .to(\"bean:processOrder\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "CREATE TABLE my_aggregation_repo ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_pk PRIMARY KEY (id) ); CREATE TABLE my_aggregation_repo_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_completed_pk PRIMARY KEY (id) ); }", "<bean id=\"my_repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"repositoryName\" value=\"my_aggregation_repo\"/> <property name=\"transactionManager\" ref=\"my_tx_manager\"/> <property name=\"dataSource\" ref=\"my_data_source\"/> </bean>", "from(\"direct:start\") .aggregate(header(\"id\"), new UseLatestAggregationStrategy()) .completionTimeout(3000) .parallelProcessing() .to(\"mock:aggregated\");", "import org.apache.camel.processor.aggregate.AbstractListAggregationStrategy; /** * Strategy to aggregate integers into a List<Integer>. 
*/ public final class MyListOfNumbersStrategy extends AbstractListAggregationStrategy<Integer> { @Override public Integer getValue(Exchange exchange) { // the message body contains a number, so just return that as-is return exchange.getIn().getBody(Integer.class); } }", "from(\"direct:start\").resequence(header(\"TimeStamp\")).to(\"mock:result\");", "import org.apache.camel.model.config.BatchResequencerConfig; RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"direct:start\").resequence(header(\"TimeStamp\")).batch(new BatchResequencerConfig(300,4000L)).to(\"mock:result\"); } };", "<camelContext id=\"resequencerBatch\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <resequence> <!-- batch-config can be omitted for default (batch) resequencer settings --> <batch-config batchSize=\"300\" batchTimeout=\"4000\" /> <simple>header.TimeStamp</simple> <to uri=\"mock:result\" /> </resequence> </route> </camelContext>", "from(\"jms:queue:foo\") // sort by JMSPriority by allowing duplicates (message can have same JMSPriority) // and use reverse ordering so 9 is first output (most important), and 0 is last // use batch mode and fire every 3th second .resequence(header(\"JMSPriority\")).batch().timeout(3000).allowDuplicates().reverse() .to(\"mock:result\");", "from(\"direct:start\").resequence(header(\"seqnum\")).stream().to(\"mock:result\");", "// Java import org.apache.camel.model.config.StreamResequencerConfig; RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"direct:start\").resequence(header(\"seqnum\")). stream(new StreamResequencerConfig(5000, 4000L)). to(\"mock:result\"); } };", "// Java ExpressionResultComparator<Exchange> comparator = new MyComparator(); StreamResequencerConfig config = new StreamResequencerConfig(5000, 4000L, comparator); from(\"direct:start\").resequence(header(\"seqnum\")).stream(config).to(\"mock:result\");", "<camelContext id=\"resequencerStream\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <resequence> <stream-config capacity=\"5000\" timeout=\"4000\"/> <simple>header.seqnum</simple> <to uri=\"mock:result\" /> </resequence> </route> </camelContext>", "from(\"direct:start\") .resequence(header(\"seqno\")).batch().timeout(1000) // ignore invalid exchanges (they are discarded) . 
ignoreInvalidExchanges() .to(\"mock:result\");", "from(\"direct:start\") .onException(MessageRejectedException.class).handled(true).to(\"mock:error\").end() .resequence(header(\"seqno\")).stream().timeout(1000) .rejectOld() .to(\"mock:result\");", "cxf:bean:decrypt,cxf:bean:authenticate,cxf:bean:dedup", "from(\"direct:b\").routingSlip(\"aRoutingSlipHeader\");", "from(\"direct:c\").routingSlip(\"aRoutingSlipHeader\", \"#\");", "<camelContext id=\"buildRoutingSlip\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:c\"/> <routingSlip uriDelimiter=\"#\"> <headerName>aRoutingSlipHeader</headerName> </routingSlip> </route> </camelContext>", "from(\"direct:a\").routingSlip(\"myHeader\").ignoreInvalidEndpoints();", "<route> <from uri=\"direct:a\"/> <routingSlip ignoreInvalidEndpoints=\"true\"> <headerName>myHeader</headerName> </routingSlip> </route>", "from(\"seda:a\").throttle(100).to(\"seda:b\");", "from(\"seda:a\").throttle(3).timePeriodMillis(30000).to(\"mock:result\");", "<camelContext id=\"throttleRoute\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <!-- throttle 3 messages per 30 sec --> <throttle timePeriodMillis=\"30000\"> <constant>3</constant> <to uri=\"mock:result\"/> </throttle> </route> </camelContext>", "<camelContext id=\"throttleRoute\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:expressionHeader\"/> <throttle timePeriodMillis=\"500\"> <!-- use a header to determine how many messages to throttle per 0.5 sec --> <header>throttleValue</header> <to uri=\"mock:result\"/> </throttle> </route> </camelContext>", "from(\"seda:a\").throttle(100).asyncDelayed().to(\"seda:b\");", "from(\"seda:a\").delay(2000).to(\"mock:result\");", "from(\"seda:a\").delay(header(\"MyDelay\")).to(\"mock:result\");", "from(\"direct:start\") .onException(Exception.class) .maximumRedeliveries(2) .backOffMultiplier(1.5) .handled(true) .delay(1000) .log(\"Halting for some time\") .to(\"mock:halt\") .end() .end() .to(\"mock:result\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <delay> <header>MyDelay</header> </delay> <to uri=\"mock:result\"/> </route> <route> <from uri=\"seda:b\"/> <delay> <constant>1000</constant> </delay> <to uri=\"mock:result\"/> </route> </camelContext>", "from(\"activemq:foo\"). delay().expression().method(\"someBean\", \"computeDelay\"). 
to(\"activemq:bar\");", "public class SomeBean { public long computeDelay() { long delay = 0; // use java code to compute a delay value in millis return delay; } }", "from(\"activemq:queue:foo\") .delay(1000) .asyncDelayed() .to(\"activemq:aDelayedQueue\");", "<route> <from uri=\"activemq:queue:foo\"/> <delay asyncDelayed=\"true\"> <constant>1000</constant> </delay> <to uri=\"activemq:aDealyedQueue\"/> </route>", "from(\"direct:start\").loadBalance().roundRobin().to(\"mock:x\", \"mock:y\", \"mock:z\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <roundRobin/> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route> </camelContext>", "from(\"direct:start\").loadBalance().roundRobin().to(\"mock:x\", \"mock:y\", \"mock:z\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <roundRobin/> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route> </camelContext>", "from(\"direct:start\").loadBalance().random().to(\"mock:x\", \"mock:y\", \"mock:z\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <random/> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route> </camelContext>", "from(\"direct:start\").loadBalance().sticky(header(\"username\")).to(\"mock:x\", \"mock:y\", \"mock:z\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <sticky> <correlationExpression> <simple>header.username</simple> </correlationExpression> </sticky> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route> </camelContext>", "from(\"direct:start\").loadBalance().topic().to(\"mock:x\", \"mock:y\", \"mock:z\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <topic/> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route> </camelContext>", "from(\"direct:start\") // here we will load balance if IOException was thrown // any other kind of exception will result in the Exchange as failed // to failover over any kind of exception we can just omit the exception // in the failOver DSL .loadBalance().failover(IOException.class) .to(\"direct:x\", \"direct:y\", \"direct:z\");", "// enable redelivery so failover can react errorHandler(defaultErrorHandler().maximumRedeliveries(5)); from(\"direct:foo\") .loadBalance() .failover(IOException.class, MyOtherException.class) .to(\"direct:a\", \"direct:b\");", "<route errorHandlerRef=\"myErrorHandler\"> <from uri=\"direct:foo\"/> <loadBalance> <failover> <exception>java.io.IOException</exception> <exception>com.mycompany.MyOtherException</exception> </failover> <to uri=\"direct:a\"/> <to uri=\"direct:b\"/> </loadBalance> </route>", "from(\"direct:start\") // Use failover load balancer in stateful round robin mode, // which means it will fail over immediately in case of an exception // as it does NOT inherit error handler. It will also keep retrying, as // it is configured to retry indefinitely. .loadBalance().failover(-1, false, true) .to(\"direct:bad\", \"direct:bad2\", \"direct:good\", \"direct:good2\");", "<route> <from uri=\"direct:start\"/> <loadBalance> <!-- failover using stateful round robin, which will keep retrying the 4 endpoints indefinitely. 
You can set the maximumFailoverAttempt to break out after X attempts --> <failover roundRobin=\"true\"/> <to uri=\"direct:bad\"/> <to uri=\"direct:bad2\"/> <to uri=\"direct:good\"/> <to uri=\"direct:good2\"/> </loadBalance> </route>", "// Java // round-robin from(\"direct:start\") .loadBalance().weighted(true, \"4:2:1\" distributionRatioDelimiter=\":\") .to(\"mock:x\", \"mock:y\", \"mock:z\"); //random from(\"direct:start\") .loadBalance().weighted(false, \"4,2,1\") .to(\"mock:x\", \"mock:y\", \"mock:z\");", "<!-- round-robin --> <route> <from uri=\"direct:start\"/> <loadBalance> <weighted roundRobin=\"true\" distributionRatio=\"4:2:1\" distributionRatioDelimiter=\":\" /> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route>", "from(\"direct:start\") // using our custom load balancer .loadBalance(new MyLoadBalancer()) .to(\"mock:x\", \"mock:y\", \"mock:z\");", "<!-- this is the implementation of our custom load balancer --> <bean id=\"myBalancer\" class=\"org.apache.camel.processor.CustomLoadBalanceTestUSDMyLoadBalancer\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <!-- refer to my custom load balancer --> <custom ref=\"myBalancer\"/> <!-- these are the endpoints to balancer --> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance> </route> </camelContext>", "<loadBalance ref=\"myBalancer\"> <!-- these are the endpoints to balancer --> <to uri=\"mock:x\"/> <to uri=\"mock:y\"/> <to uri=\"mock:z\"/> </loadBalance>", "public static class MyLoadBalancer extends LoadBalancerSupport { public boolean process(Exchange exchange, AsyncCallback callback) { String body = exchange.getIn().getBody(String.class); try { if (\"x\".equals(body)) { getProcessors().get(0).process(exchange); } else if (\"y\".equals(body)) { getProcessors().get(1).process(exchange); } else { getProcessors().get(2).process(exchange); } } catch (Throwable e) { exchange.setException(e); } callback.done(true); return true; } }", "from(\"direct:start\").loadBalance() .circuitBreaker(2, 1000L, MyCustomException.class) .to(\"mock:result\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <loadBalance> <circuitBreaker threshold=\"2\" halfOpenAfter=\"1000\"> <exception>MyCustomException</exception> </circuitBreaker> <to uri=\"mock:result\"/> </loadBalance> </route> </camelContext>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hystrix</artifactId> <version>x.x.x</version> <!-- Specify the same version as your Camel core version. 
--> </dependency>", "from(\"direct:start\") .hystrix() .to(\"http://fooservice.com/slow\") .onFallback() .transform().constant(\"Fallback message\") .end() .to(\"mock:result\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <hystrix> <to uri=\"http://fooservice.com/slow\"/> <onFallback> <transform> <constant>Fallback message</constant> </transform> </onFallback> </hystrix> <to uri=\"mock:result\"/> </route> </camelContext>", "from(\"direct:start\") .hystrix() .hystrixConfiguration() .executionTimeoutInMilliseconds(5000).circuitBreakerSleepWindowInMilliseconds(10000) .end() .to(\"http://fooservice.com/slow\") .onFallback() .transform().constant(\"Fallback message\") .end() .to(\"mock:result\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <hystrix> <hystrixConfiguration executionTimeoutInMilliseconds=\"5000\" circuitBreakerSleepWindowInMilliseconds=\"10000\"/> <to uri=\"http://fooservice.com/slow\"/> <onFallback> <transform> <constant>Fallback message</constant> </transform> </onFallback> </hystrix> <to uri=\"mock:result\"/> </route> </camelContext>", "You can also configure Hystrix globally and then refer to that configuration. For example:", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- This is a shared config that you can refer to from all Hystrix patterns. --> <hystrixConfiguration id=\"sharedConfig\" executionTimeoutInMilliseconds=\"5000\" circuitBreakerSleepWindowInMilliseconds=\"10000\"/> <route> <from uri=\"direct:start\"/> <hystrix hystrixConfigurationRef=\"sharedConfig\"> <to uri=\"http://fooservice.com/slow\"/> <onFallback> <transform> <constant>Fallback message</constant> </transform> </onFallback> </hystrix> <to uri=\"mock:result\"/> </route> </camelContext>", "from(\"direct:start\") .serviceCall(\"foo\") .to(\"mock:result\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <serviceCall name=\"foo\"/> <to uri=\"mock:result\"/> </route> </camelContext>", "toD(\"http://IP:PORT\")", "<toD uri=\"http:IP:port\"/>", "serviceCall(\"foo?beer=yes\")", "<serviceCall name=\"foo?beer=yes\"/>", "serviceCall(\"foo/beverage?beer=yes\")", "<serviceCall name=\"foo/beverage?beer=yes\"/>", "serviceCall(\"myService\") -> http://hostname:port serviceCall(\"myService/foo\") -> http://hostname:port/foo serviceCall(\"http:myService/foo\") -> http:hostname:port/foo", "<serviceCall name=\"myService\"/> -> http://hostname:port <serviceCall name=\"myService/foo\"/> -> http://hostname:port/foo <serviceCall name=\"http:myService/foo\"/> -> http:hostname:port/foo", "serviceCall(\"myService\", \"http:myService.host:myService.port/foo\") -> http:hostname:port/foo serviceCall(\"myService\", \"netty4:tcp:myService?connectTimeout=1000\") -> netty:tcp:hostname:port?connectTimeout=1000", "<serviceCall name=\"myService\" uri=\"http:myService.host:myService.port/foo\"/> -> http:hostname:port/foo <serviceCall name=\"myService\" uri=\"netty4:tcp:myService?connectTimeout=1000\"/> -> netty:tcp:hostname:port?connectTimeout=1000", "KubernetesConfigurationDefinition config = new KubernetesConfigurationDefinition(); config.setComponent(\"netty4-http\"); // Register the service call configuration: context.setServiceCallConfiguration(config); from(\"direct:start\") .serviceCall(\"foo\") .to(\"mock:result\");", "&lt;camelContext xmlns=\"http://camel.apache.org/schema/spring\"> 
&lt;kubernetesConfiguration id=\"kubernetes\" component=\"netty4-http\"/> &lt;route> &lt;from uri=\"direct:start\"/> &lt;serviceCall name=\"foo\"/> &lt;to uri=\"mock:result\"/> &lt;/route> &lt;/camelContext>", "from(\"cxf:bean:offer\").multicast(new HighestBidAggregationStrategy()). to(\"cxf:bean:Buyer1\", \"cxf:bean:Buyer2\", \"cxf:bean:Buyer3\");", "// Java import org.apache.camel.processor.aggregate.AggregationStrategy; import org.apache.camel.Exchange; public class HighestBidAggregationStrategy implements AggregationStrategy { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { float oldBid = oldExchange.getOut().getHeader(\"Bid\", Float.class); float newBid = newExchange.getOut().getHeader(\"Bid\", Float.class); return (newBid > oldBid) ? newExchange : oldExchange; } }", "from(\"cxf:bean:offer\") .multicast(new HighestBidAggregationStrategy()) .parallelProcessing() .to(\"cxf:bean:Buyer1\", \"cxf:bean:Buyer2\", \"cxf:bean:Buyer3\");", "from(\"cxf:bean:offer\") .multicast(new HighestBidAggregationStrategy()) .executorService( MyExecutor ) .to(\"cxf:bean:Buyer1\", \"cxf:bean:Buyer2\", \"cxf:bean:Buyer3\");", "from(\"direct:start\") .multicast(new MyAggregationStrategy()) .parallelProcessing() .timeout(500) .to(\"direct:a\", \"direct:b\", \"direct:c\") .end() .to(\"mock:result\");", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd \"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"cxf:bean:offer\"/> <multicast strategyRef=\"highestBidAggregationStrategy\" parallelProcessing=\"true\" threadPoolRef=\"myThreadExcutor\"> <to uri=\"cxf:bean:Buyer1\"/> <to uri=\"cxf:bean:Buyer2\"/> <to uri=\"cxf:bean:Buyer3\"/> </multicast> </route> </camelContext> <bean id=\"highestBidAggregationStrategy\" class=\"com.acme.example.HighestBidAggregationStrategy\"/> <bean id=\"myThreadExcutor\" class=\"com.acme.example.MyThreadExcutor\"/> </beans>", "from(\"direct:start\") .multicast().onPrepare(new CustomProc()) .to(\"direct:a\").to(\"direct:b\");", "// Java import org.apache.camel.*; public class CustomProc implements Processor { public void process(Exchange exchange) throws Exception { BodyType body = exchange.getIn().getBody( BodyType .class); // Make a _deep_ copy of of the body object BodyType clone = BodyType .deepCopy(); exchange.getIn().setBody(clone); // Headers and attachments have already been // shallow-copied. If you need deep copies, // add some more code here. 
} }", "public class Animal implements Serializable { private int id; private String name; public Animal() { } public Animal(int id, String name) { this.id = id; this.name = name; } public Animal deepClone() { Animal clone = new Animal(); clone.setId(getId()); clone.setName(getName()); return clone; } public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } @Override public String toString() { return id + \" \" + name; } }", "public class AnimalDeepClonePrepare implements Processor { public void process(Exchange exchange) throws Exception { Animal body = exchange.getIn().getBody(Animal.class); // do a deep clone of the body which wont affect when doing multicasting Animal clone = body.deepClone(); exchange.getIn().setBody(clone); } }", "from(\"direct:start\") .multicast().onPrepare(new AnimalDeepClonePrepare()).to(\"direct:a\").to(\"direct:b\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <!-- use on prepare with multicast --> <multicast onPrepareRef=\"animalDeepClonePrepare\"> <to uri=\"direct:a\"/> <to uri=\"direct:b\"/> </multicast> </route> <route> <from uri=\"direct:a\"/> <process ref=\"processorA\"/> <to uri=\"mock:a\"/> </route> <route> <from uri=\"direct:b\"/> <process ref=\"processorB\"/> <to uri=\"mock:b\"/> </route> </camelContext> <!-- the on prepare Processor which performs the deep cloning --> <bean id=\"animalDeepClonePrepare\" class=\"org.apache.camel.processor.AnimalDeepClonePrepare\"/> <!-- processors used for the last two routes, as part of unit test --> <bean id=\"processorA\" class=\"org.apache.camel.processor.MulticastOnPrepareTestUSDProcessorA\"/> <bean id=\"processorB\" class=\"org.apache.camel.processor.MulticastOnPrepareTestUSDProcessorB\"/>", "// split up the order so individual OrderItems can be validated by the appropriate bean from(\"direct:start\") .split().body() .choice() .when().method(\"orderItemHelper\", \"isWidget\") .to(\"bean:widgetInventory\") .otherwise() .to(\"bean:gadgetInventory\") .end() .to(\"seda:aggregate\"); // collect and re-assemble the validated OrderItems into an order again from(\"seda:aggregate\") .aggregate(new MyOrderAggregationStrategy()) .header(\"orderId\") .completionTimeout(1000L) .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <split> <simple>body</simple> <choice> <when> <method bean=\"orderItemHelper\" method=\"isWidget\"/> <to uri=\"bean:widgetInventory\"/> </when> <otherwise> <to uri=\"bean:gadgetInventory\"/> </otherwise> </choice> <to uri=\"seda:aggregate\"/> </split> </route> <route> <from uri=\"seda:aggregate\"/> <aggregate strategyRef=\"myOrderAggregatorStrategy\" completionTimeout=\"1000\"> <correlationExpression> <simple>header.orderId</simple> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route>", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <recipientList> <header>listOfVendors</header> </recipientList> </route> <route> <from uri=\"seda:quoteAggregator\"/> <aggregate strategyRef=\"aggregatorStrategy\" completionTimeout=\"1000\"> <correlationExpression> <header>quoteRequestId</header> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext>", "Map<String, Object> headers = new HashMap<String, Object>(); headers.put(\"listOfVendors\", \"bean:vendor1, bean:vendor2, bean:vendor3\"); headers.put(\"quoteRequestId\", 
\"quoteRequest-1\"); template.sendBodyAndHeaders(\"direct:start\", \"<quote_request item=\\\"beer\\\"/>\", headers);", "public class MyVendor { private int beerPrice; @Produce(uri = \"seda:quoteAggregator\") private ProducerTemplate quoteAggregator; public MyVendor(int beerPrice) { this.beerPrice = beerPrice; } public void getQuote(@XPath(\"/quote_request/@item\") String item, Exchange exchange) throws Exception { if (\"beer\".equals(item)) { exchange.getIn().setBody(beerPrice); quoteAggregator.send(exchange); } else { throw new Exception(\"No quote available for \" + item); } } }", "<bean id=\"aggregatorStrategy\" class=\"org.apache.camel.spring.processor.scattergather.LowestQuoteAggregationStrategy\"/> <bean id=\"vendor1\" class=\"org.apache.camel.spring.processor.scattergather.MyVendor\"> <constructor-arg> <value>1</value> </constructor-arg> </bean> <bean id=\"vendor2\" class=\"org.apache.camel.spring.processor.scattergather.MyVendor\"> <constructor-arg> <value>2</value> </constructor-arg> </bean> <bean id=\"vendor3\" class=\"org.apache.camel.spring.processor.scattergather.MyVendor\"> <constructor-arg> <value>3</value> </constructor-arg> </bean>", "public class LowestQuoteAggregationStrategy implements AggregationStrategy { public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { // the first time we only have the new exchange if (oldExchange == null) { return newExchange; } if (oldExchange.getIn().getBody(int.class) < newExchange.getIn().getBody(int.class)) { return oldExchange; } else { return newExchange; } } }", "from(\"direct:start\").multicast().to(\"seda:vendor1\", \"seda:vendor2\", \"seda:vendor3\"); from(\"seda:vendor1\").to(\"bean:vendor1\").to(\"seda:quoteAggregator\"); from(\"seda:vendor2\").to(\"bean:vendor2\").to(\"seda:quoteAggregator\"); from(\"seda:vendor3\").to(\"bean:vendor3\").to(\"seda:quoteAggregator\"); from(\"seda:quoteAggregator\") .aggregate(header(\"quoteRequestId\"), new LowestQuoteAggregationStrategy()).to(\"mock:result\")", "from(\"direct:a\").loop(8).to(\"mock:result\");", "from(\"direct:b\").loop(header(\"loop\")).to(\"mock:result\");", "from(\"direct:c\").loop().xpath(\"/hello/@times\").to(\"mock:result\");", "<route> <from uri=\"direct:a\"/> <loop> <constant>8</constant> <to uri=\"mock:result\"/> </loop> </route>", "<route> <from uri=\"direct:b\"/> <loop> <header>loop</header> <to uri=\"mock:result\"/> </loop> </route>", "from(\"direct:start\") // instruct loop to use copy mode, which mean it will use a copy of the input exchange // for each loop iteration, instead of keep using the same exchange all over .loop(3).copy() .transform(body().append(\"B\")) .to(\"mock:loop\") .end() .to(\"mock:result\");", "from(\"direct:start\") // by default loop will keep using the same exchange so on the 2nd and 3rd iteration its // the same exchange that was previous used that are being looped all over .loop(3) .transform(body().append(\"B\")) .to(\"mock:loop\") .end() .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <!-- enable copy mode for loop eip --> <loop copy=\"true\"> <constant>3</constant> <transform> <simple>USD{body}B</simple> </transform> <to uri=\"mock:loop\"/> </loop> <to uri=\"mock:result\"/> </route>", "from(\"direct:start\") .loopDoWhile(simple(\"USD{body.length} <= 5\")) .to(\"mock:loop\") .transform(body().append(\"A\")) .end() .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <loop doWhile=\"true\"> <simple>USD{body.length} <= 5</simple> <to uri=\"mock:loop\"/> <transform> <simple>AUSD{body}</simple> 
</transform> </loop> <to uri=\"mock:result\"/> </route>", "// Sample with default sampling period (1 second) from(\"direct:sample\") .sample() .to(\"mock:result\"); // Sample with explicitly specified sample period from(\"direct:sample-configured\") .sample(1, TimeUnit.SECONDS) .to(\"mock:result\"); // Alternative syntax for specifying sampling period from(\"direct:sample-configured-via-dsl\") .sample().samplePeriod(1).timeUnits(TimeUnit.SECONDS) .to(\"mock:result\"); from(\"direct:sample-messageFrequency\") .sample(10) .to(\"mock:result\"); from(\"direct:sample-messageFrequency-via-dsl\") .sample().sampleMessageFrequency(5) .to(\"mock:result\");", "<route> <from uri=\"direct:sample\"/> <sample samplePeriod=\"1\" units=\"seconds\"> <to uri=\"mock:result\"/> </sample> </route> <route> <from uri=\"direct:sample-messageFrequency\"/> <sample messageFrequency=\"10\"> <to uri=\"mock:result\"/> </sample> </route> <route> <from uri=\"direct:sample-messageFrequency-via-dsl\"/> <sample messageFrequency=\"5\"> <to uri=\"mock:result\"/> </sample> </route>", "from(\"direct:start\") // use a bean as the dynamic router .dynamicRouter(bean(DynamicRouterTest.class, \"slip\"));", "// Java /** * Use this method to compute dynamic where we should route next. * * @param body the message body * @return endpoints to go, or <tt>null</tt> to indicate the end */ public String slip(String body) { bodies.add(body); invoked++; if (invoked == 1) { return \"mock:a\"; } else if (invoked == 2) { return \"mock:b,mock:c\"; } else if (invoked == 3) { return \"direct:foo\"; } else if (invoked == 4) { return \"mock:result\"; } // no more so return null return null; }", "<bean id=\"mySlip\" class=\"org.apache.camel.processor.DynamicRouterTest\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <dynamicRouter> <!-- use a method call on a bean as dynamic router --> <method ref=\"mySlip\" method=\"slip\"/> </dynamicRouter> </route> <route> <from uri=\"direct:foo\"/> <transform><constant>Bye World</constant></transform> <to uri=\"mock:foo\"/> </route> </camelContext>", "// Java public class MyDynamicRouter { @Consume(uri = \"activemq:foo\") @DynamicRouter public String route(@XPath(\"/customer/id\") String customerId, @Header(\"Location\") String location, Document body) { // query a database to find the best match of the endpoint based on the input parameteres // return the next endpoint uri, where to go. Return null to indicate the end. } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/msgrout
probe::kprocess.release
probe::kprocess.release Name probe::kprocess.release - Process released Synopsis Values pid PID of the process being released task A task handle to the process being released Context The context of the parent, if it wanted notification of this process' termination, else the context of the process itself. Description Fires when a process is released from the kernel. This always follows a kprocess.exit, though it may be delayed somewhat if the process waits in a zombie state.
[ "kprocess.release" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kprocess-release
Chapter 10. Deploying installer-provisioned clusters on bare metal
Chapter 10. Deploying installer-provisioned clusters on bare metal 10.1. Overview Installer-provisioned installation on bare metal nodes deploys and configures the infrastructure that a OpenShift Container Platform cluster runs on. This guide provides a methodology to achieving a successful installer-provisioned bare-metal installation. The following diagram illustrates the installation environment in phase 1 of deployment: The provisioning node can be removed after the installation. Provisioner : A physical machine that runs the installation program and hosts the bootstrap VM that deploys the controller of a new OpenShift Container Platform cluster. Bootstrap VM : A virtual machine used in the process of deploying an OpenShift Container Platform cluster. Network bridges : The bootstrap VM connects to the bare metal network and to the provisioning network, if present, via network bridges, eno1 and eno2 . In phase 2 of the deployment, the provisioner destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes. The following diagram illustrates phase 2 of deployment: Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . 10.2. Prerequisites Installer-provisioned installation of OpenShift Container Platform requires: One provisioner node with Red Hat Enterprise Linux (RHEL) 8.x installed. The provisioning node can be removed after installation. Three control plane nodes. Baseboard Management Controller (BMC) access to each node. At least one network: One required routable network One optional network for provisioning nodes; and, One optional management network. Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements. 10.2.1. Node requirements Installer-provisioned installation involves a number of hardware node requirements: CPU architecture: All nodes must use x86_64 CPU architecture. Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration. Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol. Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioner node and RHCOS 8 for the control plane and worker nodes. Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node. Provisioner node: Installer-provisioned installation requires one provisioner node. Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. 
Smaller clusters are more resource efficient for administrators and developers during development, production, and testing. Worker nodes: While not required, a typical production cluster has two or more worker nodes. Important Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state. Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement. Important When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail. Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed. Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details. Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the bootMode value to UEFISecureBoot in the install-config.yaml file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version 2.75.75.75 or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details. Note Red Hat does not support Secure Boot with self-generated keys. 10.2.2. Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. 
If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 10.2.3. Firmware requirements for installing with virtual media The installer for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The following table lists the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. Table 10.1. Firmware compatibility for Redfish virtual media Hardware Model Management Firmware versions HP 10th Generation iLO5 2.63 or later Dell 14th Generation iDRAC 9 v4.20.20.20 - v4.40.00.00 only 13th Generation iDRAC 8 v2.75.75.75 or later Note Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy . See the hardware documentation for the nodes or contact the hardware vendor for information about updating the firmware. For HP servers, Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. For Dell servers, ensure the OpenShift Container Platform cluster nodes have AutoAttach Enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . With iDRAC 9 firmware version 04.40.00.00 , the Virtual Console plugin defaults to eHTML5 , which causes problems with the InsertVirtualMedia workflow. Set the plug-in to HTML5 to avoid this issue. The menu path is: Configuration Virtual console Plug-in Type HTML5 . Important The installer will not initiate installation on a node if the node firmware is below the foregoing versions when installing with virtual media. 10.2.4. Network requirements Installer-provisioned installation of OpenShift Container Platform involves several network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable baremetal network. 10.2.4.1. Increase the network MTU Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation. 10.2.4.2. Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot. The provisioningNetworkInterface configuration setting specifies the provisioning network NIC name on the control plane nodes, which must be identical on the control plane nodes. 
The bootMACAddress configuration setting provides a means to specify a particular NIC on each node for the provisioning network. The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . baremetal : The baremetal network is a routable network. You can use any NIC to interface with the baremetal network provided the NIC is not configured to use the provisioning network. Important When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network. 10.2.4.3. DNS requirements Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name. <cluster_name>.<base_domain> For example: test-cluster.example.com OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard ingress API A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes. Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 10.2. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. An A/AAAA record, and a PTR record, identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Routes *.apps.<cluster_name>.<base_domain>. The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Tip You can use the dig command to verify DNS resolution. 10.2.4.4. Dynamic Host Configuration Protocol (DHCP) requirements By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed , which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file. 
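Before reserving addresses with DHCP, you can spot-check the DNS records described in the DNS requirements section with the dig command. As a rough example only, using the illustrative cluster name test-cluster and base domain example.com from above, the API name, a name under the ingress wildcard, and the reverse record can be queried as follows; the names returned should resolve to the reserved virtual IP addresses:

$ dig +short api.test-cluster.example.com
$ dig +short test.apps.test-cluster.example.com
$ dig +short -x <api_ip>    # reverse (PTR) lookup for the API virtual IP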
Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server. 10.2.4.5. Reserving IP addresses for nodes with the DHCP server For the baremetal network, a network administrator must reserve a number of IP addresses, including: Two unique virtual IP addresses. One virtual IP address for the API endpoint. One virtual IP address for the wildcard ingress endpoint. One IP address for the provisioner node. One IP address for each control plane (master) node. One IP address for each worker node, if applicable. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses with an infinite lease. During deployment, the installer will reconfigure the NICs from DHCP assigned addresses to static IP addresses. NICs with DHCP leases that are not infinite will remain configured to use DHCP. Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator. Ensuring that your DHCP server can provide infinite leases Your DHCP server must provide a DHCP expiration time of 4294967295 seconds to properly set an infinite lease as specified by rfc2131 . If a lesser value is returned for the DHCP infinite lease time, the node reports an error and a permanent IP is not set for the node. In RHEL 8, dhcpd does not provide infinite leases. If you want to use the provisioner node to serve dynamic IP addresses with infinite lease times, use dnsmasq rather than dhcpd . Networking between external load balancers and control plane nodes External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Do not change IP addresses manually after deployment Do not change a worker node's IP address manually after deployment. To change the IP address of a worker node after deployment, you must mark the worker node unschedulable, evacuate the pods, delete the node, and recreate it with the new IP address. See "Working with nodes" for additional details. To change the IP address of a control plane node after deployment, contact support. The storage interface requires a DHCP reservation. The following table provides an exemplary embodiment of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer. 
Usage Host Name IP API api.<cluster_name>.<base_domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<base_domain> <ip> Provisioner node provisioner.<cluster_name>.<base_domain> <ip> Master-0 openshift-master-0.<cluster_name>.<base_domain> <ip> Master-1 openshift-master-1.<cluster_name>-.<base_domain> <ip> Master-2 openshift-master-2.<cluster_name>.<base_domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<base_domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<base_domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<base_domain> <ip> Note If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes. 10.2.4.6. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes. 10.2.4.7. State-driven network configuration requirements (Technology Preview) OpenShift Container Platform supports additional post-installation state-driven network configuration on the secondary network interfaces of cluster nodes using kubernetes-nmstate . For example, system administrators might configure a secondary network interface on cluster nodes after installation for a storage network. Note Configuration must occur before scheduling pods. State-driven network configuration requires installing kubernetes-nmstate , and also requires Network Manager running on the cluster nodes. See OpenShift Virtualization > Kubernetes NMState (Tech Preview) for additional details. 10.2.4.8. Port access for the out-of-band management IP address The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioning node during installation, the out-of-band management IP address must be granted access to port 80 on the bootstrap host and port 6180 on the OpenShift Container Platform control plane hosts. 10.2.5. Configuring nodes Configuring nodes when using the provisioning network Each node in the cluster requires the following configuration for proper installation. Warning A mismatch between nodes will cause an installation failure. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> The Red Hat Enterprise Linux (RHEL) 8.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 8.x using a local Satellite server or a PXE server, PXE-enable NIC2. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. PXE-enabled is optional. 2 Note Ensure PXE is disabled on all other NICs. 
Configure the control plane and worker nodes as follows: PXE Boot order NIC1 PXE-enabled (provisioning network) 1 Configuring nodes without the provisioning network The installation process requires one NIC: NIC Network VLAN NICx baremetal <baremetal_vlan> NICx is a routable network ( baremetal ) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet. Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . Configuring nodes for Secure Boot manually Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. Note Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media. To enable Secure Boot manually, refer to the hardware guide for the node and execute the following: Procedure Boot the node and enter the BIOS menu. Set the node's boot mode to UEFI Enabled. Enable Secure Boot. Important Red Hat does not support Secure Boot with self-generated keys. Configuring the Compatibility Support Module for Fujitsu iRMC The Compatibility Support Module (CSM) configuration provides support for legacy BIOS backward compatibility with UEFI systems. You must configure the CSM when you deploy a cluster with Fujitsu iRMC, otherwise the installation might fail. Note For information about configuring the CSM for your specific node type, refer to the hardware guide for the node. Prerequisites Ensure that you have disabled Secure Boot Control. You can disable the feature under Security Secure Boot Configuration Secure Boot Control . Procedure Boot the node and select the BIOS menu. Under the Advanced tab, select CSM Configuration from the list. Enable the Launch CSM option and set the following values: Item Value Boot option filter UEFI and Legacy Launch PXE OpROM Policy UEFI only Launch Storage OpROM policy UEFI only Other PCI device ROM priority UEFI only 10.2.6. Out-of-band management Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node. Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform 4 installation. The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network are valid options. 10.2.7. Required data for installation Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes: Out-of-band management IP Examples Dell (iDRAC) IP HP (iLO) IP Fujitsu (iRMC) IP When using the provisioning network NIC ( provisioning ) MAC address NIC ( baremetal ) MAC address When omitting the provisioning network NIC ( baremetal ) MAC address 10.2.8. Validation checklist for nodes When using the provisioning network ❏ NIC1 VLAN is configured for the provisioning network. ❏ NIC1 for the provisioning network is PXE-enabled on the provisioner, control plane (master), and worker nodes. ❏ NIC2 VLAN is configured for the baremetal network. 
❏ PXE has been disabled on all other NICs. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation. When omitting the provisioning network ❏ NIC1 VLAN is configured for the baremetal network. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation. 10.3. Setting up the environment for an OpenShift installation 10.3.1. Installing RHEL on the provisioner node With the networking configuration complete, the step is to install RHEL 8.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 10.3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni USD Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Configure networking. Note You can also configure networking from the web console. 
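If you are unsure which interface names to use in the exports in the following steps, you can first list the host's network devices. These commands are a general aid rather than part of the documented procedure, and the interface names vary by hardware:

$ sudo nmcli device status
$ ip -br link show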
Export the baremetal network NIC name: USD export PUB_CONN=<baremetal_nic_name> Configure the baremetal network: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " If you are deploying with a provisioning network, export the provisioning network NIC name: USD export PROV_CONN=<prov_nic_name> If you are deploying with a provisioning network, configure the provisioning network: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The ssh connection might disconnect after executing these steps. The IPv6 address can be any address as long as it is not routable via the baremetal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Configure the IPv4 address on the provisioning network connection. USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual ssh back into the provisioner node (if required). # ssh kni@provisioner.<cluster-name>.<domain> Verify the connection bridges have been properly created. USD sudo nmcli con show NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 Create a pull-secret.txt file. USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure , and scroll down to the Downloads section. Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 10.3.3. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installer to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.9 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 10.3.4. Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 10.3.5. 
Creating an RHCOS images cache (optional) To employ image caching, you must download two images: the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM and the RHCOS image used by the installer to provision the different nodes. Image caching is optional, but especially useful when running the installer on a network with limited bandwidth. If you are running the installer on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installer will timeout. Caching images on a web server will help in such scenarios. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstraposimage and clusterosimage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?" USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the nodes: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Get the URI for the image that the installation program will deploy on the cluster nodes: USD export RHCOS_OPENSTACK_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.openstack.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the cluster nodes: USD export RHCOS_OPENSTACK_NAME=USD{RHCOS_OPENSTACK_URI##*/} Get the SHA hash for the image that the installation program will deploy on the cluster nodes: USD export RHCOS_OPENSTACK_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.openstack.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the images and place them in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} USD curl -L USD{RHCOS_OPENSTACK_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_OPENSTACK_NAME} Confirm SELinux type is of httpd_sys_content_t for the newly created files: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ quay.io/centos7/httpd-24-centos7:latest The above command creates a caching webserver with the name rhcos_image_cache , which serves the images for deployment. 
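As an optional check that is not part of the documented procedure, you can confirm that the caching webserver returns the downloaded images before referencing them in the install-config.yaml file. Replace <provisioner_baremetal_ip> with the provisioner's IP address on the baremetal network; the image name variables were exported in the previous steps:

$ curl -sI http://<provisioner_baremetal_ip>:8080/${RHCOS_QEMU_NAME} | head -n 1
$ curl -sI http://<provisioner_baremetal_ip>:8080/${RHCOS_OPENSTACK_NAME} | head -n 1

Both requests should return an HTTP 200 status line if the images are being served correctly.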
The first image USD{RHCOS_PATH}USD{RHCOS_QEMU_URI}?sha256=USD{RHCOS_QEMU_SHA_UNCOMPRESSED} is the bootstrapOSImage and the second image USD{RHCOS_PATH}USD{RHCOS_OPENSTACK_URI}?sha256=USD{RHCOS_OPENSTACK_SHA_COMPRESSED} is the clusterOSImage in the install-config.yaml file. Generate the bootstrapOSImage and clusterOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD export CLUSTER_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_OPENSTACK_NAME}?sha256=USD{RHCOS_OPENSTACK_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" USD echo " clusterOSImage=USD{CLUSTER_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 clusterOSImage: <cluster_os_image> 2 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . 2 Replace <cluster_os_image> with the value of USDCLUSTER_OS_IMAGE . See the "Configuration files" section for additional details. 10.3.6. Configuration files 10.3.6.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information is teaching the installer and the resulting cluster enough about the available hardware so that it is able to fully manage it. Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey . apiVersion: v1 baseDomain: <domain> metadata: name: <cluster-name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api-ip> ingressVIP: <wildcard-ip> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> 2 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: "/dev/disk/by-id/<disk_id>" 3 - name: <openshift-master-1> role: master bmc: address: ipmi://<out-of-band-ip> 4 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: "/dev/disk/by-id/<disk_id>" 5 - name: <openshift-master-2> role: master bmc: address: ipmi://<out-of-band-ip> 6 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: "/dev/disk/by-id/<disk_id>" 7 - name: <openshift-worker-0> role: worker bmc: address: ipmi://<out-of-band-ip> 8 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> - name: <openshift-worker-1> role: worker bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: "/dev/disk/by-id/<disk_id>" 9 pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker. 
2 4 6 8 See the BMC addressing sections for more options. 3 5 7 9 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . Create a directory to store cluster configs. USD mkdir ~/clusterconfigs USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster. USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt. for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 10.3.6.2. Setting proxy settings within the install-config.yaml file (optional) To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 10.3.6.3. Modifying the install-config.yaml file for no provisioning network (optional) To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. platform: baremetal: apiVIP: <api_VIP> ingressVIP: <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 10.3.6.4. Modifying the install-config.yaml file for dual-stack network (optional) To deploy an OpenShift Container Platform cluster with dual-stack networking, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. Ensure the first CIDR entry is the IPv4 setting and the second CIDR entry is the IPv6 setting. 
machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important The API VIP IP address and the Ingress VIP address must be of the primary IP address family when using dual-stack networking. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. However, Red Hat does support dual-stack networking with IPv4 as the primary IP address family. Therefore, the IPv4 entries must go before the IPv6 entries. 10.3.6.5. Configuring managed Secure Boot in the install-config.yaml file (optional) You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 10.3.6.6. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 10.3. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. 
For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIP (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIP (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Table 10.4. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the baremetal bridge of the hypervisor attached to the baremetal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . clusterOSImage A URL to override the default operating system for cluster nodes. The URL must include a SHA-256 hash of the image. For example, https://mirror.openshift.com/images/rhcos-<version>-openstack.qcow2.gz?sha256=<compressed_sha256> . 
provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the baremetal network. If Disabled , you must provide two IP addresses on the baremetal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 10.5. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. 10.3.6.7. BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . 
See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 10.3.6.8. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note Currently, Redfish is only supported on Dell with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00 . With iDRAC 9 firmware version 04.40.00.00 , the Virtual Console plugin defaults to eHTML5 , which causes problems with the InsertVirtualMedia workflow. Set the plugin to HTML5 to avoid this issue. The menu path is: Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach Enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note Currently, Redfish is only supported on Dell hardware with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00 . With iDRAC 9 firmware version 04.40.00.00 , the Virtual Console plugin defaults to eHTML5 , which causes problems with the InsertVirtualMedia workflow. Set the plugin to HTML5 to avoid this issue. The menu path is: Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach Enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The redfish:// URL protocol corresponds to the redfish hardware type in Ironic. 10.3.6.9. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. 
platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 10.6. BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 10.3.6.10. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 10.7. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . 
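If the iRMC service listens on a port other than 443, appending the port to the address is expected to work, for example: address: irmc://<out-of-band-ip>:<port> Treat this format as an assumption to verify against your iRMC firmware, because only the default-port form is shown in the following example.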
The following example demonstrates an iRMC configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 10.3.6.11. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 10.8. Subfields Subfield Description deviceName A string containing a Linux device name like /dev/vda . The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 10.3.6.12. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 10.3.6.13. Configuring NTP for disconnected clusters (optional) OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. 
Butane config example variant: openshift version: 4.9.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 10.3.6.14. (Optional) Configure network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes. When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. 
Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 10.3.6.15. Configuring BIOS for worker node The following procedure configures BIOS for the worker node during the installation process. Procedure Create manifests. Modify the BMH file corresponding to the worker: Add the BIOS configuration to the spec section of the BMH file: Note Red Hat supports three BIOS configurations. See the BMH documentation for details. Only servers with bmc type irmc are supported. Other types of servers are currently not supported. Create cluster. 10.3.7. Creating a disconnected registry (optional) In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. The subsequent sections indicate that they are optional since they are steps you need to execute only when creating a disconnected registry on a registry node. You should execute all of the subsequent sub-sections labeled "(optional)" when creating a disconnected registry on a registry node. 10.3.7.1. Preparing the registry node to host the mirrored registry (optional) Make the following changes to the registry node. Procedure Open the firewall port on the registry node. 
USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node. USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held. USD sudo mkdir -p /opt/registry/{auth,certs,data} 10.3.7.2. Generating the self-signed certificate (optional) Generate a self-signed certificate for the registry node and put it in the /opt/registry/certs directory. Procedure Adjust the certificate information as appropriate. USD host_fqdn=USD( hostname --long ) USD cert_c="<Country Name>" # Country Name (C, 2 letter code) USD cert_s="<State>" # Certificate State (S) USD cert_l="<Locality>" # Certificate Locality (L) USD cert_o="<Organization>" # Certificate Organization (O) USD cert_ou="<Org Unit>" # Certificate Organizational Unit (OU) USD cert_cn="USD{host_fqdn}" # Certificate Common Name (CN) USD openssl req \ -newkey rsa:4096 \ -nodes \ -sha256 \ -keyout /opt/registry/certs/domain.key \ -x509 \ -days 365 \ -out /opt/registry/certs/domain.crt \ -addext "subjectAltName = DNS:USD{host_fqdn}" \ -subj "/C=USD{cert_c}/ST=USD{cert_s}/L=USD{cert_l}/O=USD{cert_o}/OU=USD{cert_ou}/CN=USD{cert_cn}" Note When replacing <Country Name> , ensure that it only contains two letters. For example, US . Update the registry node's ca-trust with the new certificate. USD sudo cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ USD sudo update-ca-trust extract 10.3.7.3. Creating the registry podman container (optional) The registry container uses the /opt/registry directory for certificates, authentication files, and to store its data files. The registry container uses httpd and needs an htpasswd file for authentication. Procedure Create an htpasswd file in /opt/registry/auth for the container to use. USD htpasswd -bBc /opt/registry/auth/htpasswd <user> <passwd> Replace <user> with the user name and <passwd> with the password. Create and start the registry container. USD podman create \ --name ocpdiscon-registry \ -p 5000:5000 \ -e "REGISTRY_AUTH=htpasswd" \ -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \ -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" \ -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \ -e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt" \ -e "REGISTRY_HTTP_TLS_KEY=/certs/domain.key" \ -e "REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true" \ -v /opt/registry/data:/var/lib/registry:z \ -v /opt/registry/auth:/auth:z \ -v /opt/registry/certs:/certs:z \ docker.io/library/registry:2 USD podman start ocpdiscon-registry 10.3.7.4. Copy and update the pull-secret (optional) Copy the pull secret file from the provisioner node to the registry node and modify it to include the authentication information for the new registry node. Procedure Copy the pull-secret.txt file. USD scp kni@provisioner:/home/kni/pull-secret.txt pull-secret.txt Update the host_fqdn environment variable with the fully qualified domain name of the registry node. USD host_fqdn=USD( hostname --long ) Update the b64auth environment variable with the base64 encoding of the http credentials used to create the htpasswd file. USD b64auth=USD( echo -n '<username>:<passwd>' | openssl base64 ) Replace <username> with the user name and <passwd> with the password. Set the AUTHSTRING environment variable to use the base64 authorization string. 
The USDUSER variable is an environment variable containing the name of the current user. USD AUTHSTRING="{\"USDhost_fqdn:5000\": {\"auth\": \"USDb64auth\",\"email\": \"[email protected]\"}}" Update the pull-secret.txt file. USD jq ".auths += USDAUTHSTRING" < pull-secret.txt > pull-secret-update.txt 10.3.7.5. Mirroring the repository (optional) Procedure Copy the oc binary from the provisioner node to the registry node. USD sudo scp kni@provisioner:/usr/local/bin/oc /usr/local/bin Set the required environment variables. Set the release version: USD VERSION=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.9 . Set the local registry name and host port: USD LOCAL_REG='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Set the local repository name: USD LOCAL_REPO='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Mirror the remote install images to the local repository. USD /usr/local/bin/oc adm release mirror \ -a pull-secret-update.txt \ --from=USDUPSTREAM_REPO \ --to-release-image=USDLOCAL_REG/USDLOCAL_REPO:USD{VERSION} \ --to=USDLOCAL_REG/USDLOCAL_REPO 10.3.7.6. Modify the install-config.yaml file to use the disconnected registry (optional) On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file. The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD echo "additionalTrustBundle: |" >> install-config.yaml USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file. USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml Note Replace registry.example.com with the registry's fully qualified domain name. 10.3.8. Deploying routers on worker nodes During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If the initial cluster has only one worker node, or if a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Note By default, the installer deploys two routers. If the cluster has at least two worker nodes, you can skip this section. Note If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default. If the cluster has no worker nodes, you can skip this section. 
Procedure Create a router-replicas.yaml file. apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1 . If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory. cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 10.3.9. Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created a disconnected registry (optional). ❏ (optional) Validate disconnected registry settings if in use. ❏ (optional) Deployed routers on worker nodes. 10.3.10. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 10.3.11. Following the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder. USD tail -f /path/to/install-dir/.openshift_install.log 10.3.12. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly. Additional resources See OpenShift Container Platform upgrade channels and releases for an explanation of the different release channels. 10.4. Installer-provisioned post-installation configuration After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures. 10.4.1. Configuring NTP for disconnected clusters (optional) OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment. OpenShift Container Platform nodes must agree on a date and time to run properly. 
When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.9.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes. USD oc apply -f 99-master-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes. 
USD oc apply -f 99-worker-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created Check the status of the applied NTP settings. USD oc describe machineconfigpool 10.4.2. Enabling a provisioning network after installation The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the baremetal network. You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO). Prerequisites A dedicated physical network must exist, connected to all worker and control plane nodes. You must isolate the native, untagged physical network. The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed . You can omit the provisioningInterface setting in OpenShift Container Platform 4.9 to use the bootMACAddress configuration setting. Procedure When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1 . Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file: USD oc get provisioning -o yaml > enable-provisioning-nw.yaml Modify the provisioning CR file: USD vim ~/enable-provisioning-nw.yaml Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed . Then, add the provisioningOSDownloadURL , provisioningIP , provisioningNetworkCIDR , provisioningDHCPRange , provisioningInterface , and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting. apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningOSDownloadURL: 2 provisioningIP: 3 provisioningNetworkCIDR: 4 provisioningDHCPRange: 5 provisioningInterface: 6 watchAllNameSpaces: 7 1 The provisioningNetwork is one of Managed , Unmanaged , or Disabled . When set to Managed , Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged , the system administrator configures the DHCP server manually. 2 The provisioningOSDownloadURL is a valid HTTPS URL with a valid sha256 checksum that enables the Metal3 pod to download a qcow2 operating system image ending in .qcow2.gz or .qcow2.xz . This field is required whether the provisioning network is Managed , Unmanaged , or Disabled . For example: http://192.168.0.1/images/rhcos- <version> .x86_64.qcow2.gz?sha256= <sha> . 3 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled . The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server. 4 The Classless Inter-Domain Routing (CIDR) address. 
If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled . For example: 192.168.0.1/24 . 5 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled . For example: 192.168.0.64, 192.168.0.253 . 6 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled . Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead. 7 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false . Save the changes to the provisioning CR file. Apply the provisioning CR file to the cluster: USD oc apply -f enable-provisioning-nw.yaml 10.4.3. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Prerequisites On your load balancer, TCP over ports 6443, 443, and 80 must be available to any users of your system. Load balance the API port, 6443, between each of the control plane nodes. Load balance the application ports, 443 and 80, between all of the compute nodes. On your load balancer, port 22623, which is used to serve ignition startup configurations to nodes, is not exposed outside of the cluster. Your load balancer must be able to access every machine in your cluster. Methods to allow this access include: Attaching the load balancer to the cluster's machine subnet. Attaching floating IP addresses to machines that use the load balancer. Important External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Procedure Enable access to the cluster from your load balancer on ports 6443, 443, and 80. As an example, note this HAProxy configuration: A section of a sample HAProxy configuration ... listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check Add records to your DNS server for the cluster API and apps over the load balancer. For example: <load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain> From a command line, use curl to verify that the external load balancer and DNS configuration are operational. 
Verify that the cluster API is accessible: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that cluster applications are accessible: Note You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser. USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, you receive an HTTP response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private 10.5. Expanding the cluster After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites. Note Expanding the cluster using RedFish Virtual Media involves meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details when expanding the cluster using RedFish Virtual Media. 10.5.1. Preparing the bare metal node Expanding the cluster requires a DHCP server. Each node must have a DHCP reservation. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses in the DHCP server with an infinite lease . After the installer provisions the node successfully, the dispatcher script will check the node's network configuration. If the dispatcher script finds that the network configuration contains a DHCP infinite lease, it will recreate the connection as a static IP connection using the IP address from the DHCP infinite lease. NICs without DHCP infinite leases will remain unmodified. Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator. Preparing the bare metal node requires executing the following procedure from the provisioner node. Procedure Get the oc binary, if needed. It should already exist on the provisioner node. USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc USD sudo cp oc /usr/local/bin Power off the bare metal node by using the baseboard management controller, and ensure it is off. Retrieve the user name and password of the bare metal node's baseboard management controller. 
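If the node exposes IPMI, an optional way to confirm that the retrieved credentials work and that the node is powered off is an ipmitool query from the provisioner node. This check assumes IPMI access and is not a required step.

USD ipmitool -I lanplus -U <user> -P <password> -H <out-of-band-ip> power status

The command should report that the chassis power is off before you continue.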
Then, create base64 strings from the user name and password: USD echo -ne "root" | base64 USD echo -ne "password" | base64 Create a configuration file for the bare metal node. USD vim bmh.yaml --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret type: Opaque data: username: <base64-of-uid> password: <base64-of-pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> spec: online: true bootMACAddress: <NIC1-mac-address> bmc: address: <protocol>://<bmc-ip> credentialsName: openshift-worker-<num>-bmc-secret Replace <num> for the worker number of the bare metal node in the two name fields and the credentialsName field. Replace <base64-of-uid> with the base64 string of the user name. Replace <base64-of-pwd> with the base64 string of the password. Replace <NIC1-mac-address> with the MAC address of the bare metal node's first NIC. See the BMC addressing section for additional BMC configuration options. Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc-ip> with the IP address of the bare metal node's baseboard management controller. Note If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See Diagnosing a host duplicate MAC address for more information. Create the bare metal node. USD oc -n openshift-machine-api create -f bmh.yaml secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Where <num> will be the worker number. Power up and inspect the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> ready true 10.5.2. Replacing a bare-metal control plane node Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section. Procedure Ensure that the Bare Metal Operator is available: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.9.0 True False False 3d15h Remove the old BareMetalHost and Machine objects: USD oc delete bmh -n openshift-machine-api <host_name> USD oc delete machine -n openshift-machine-api <machine_name> Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field. After you remove the BareMetalHost and Machine objects, then the machine controller automatically deletes the Node object. 
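Optionally, confirm that the old objects are gone before creating the replacement host; these are standard queries against the openshift-machine-api namespace.

USD oc get bmh -n openshift-machine-api

USD oc get machine -n openshift-machine-api

USD oc get nodes

The deleted BareMetalHost , Machine , and Node objects should no longer appear in the output.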
Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false hardwareProfile: unknown online: true EOF 1 4 6 Replace <num> for the control plane number of the bare metal node in the name fields and the credentialsName field. 2 Replace <base64_of_uid> with the base64 string of the user name. 3 Replace <base64_of_pwd> with the base64 string of the password. 5 Replace <protocol> with the BMC protocol, such as redfish , redfish-virtualmedia , idrac-virtualmedia , or others. Replace <bmc_ip> with the IP address of the bare metal node's baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section. 7 Replace <NIC1_mac_address> with the MAC address of the bare metal node's first NIC. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. View available BareMetalHost objects: USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-2.example.com provisioned compute-2-l2zmb true 4h53m There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object. Create a Machine object: USD cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF 1 2 3 Replace <num> for the control plane number of the bare metal node in the name , labels and annotations fields.
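As an optional check that is not part of the documented procedure, verify that the new Machine object exists; control-plane-<num> is the same placeholder used above. Once provisioning starts, the BareMetalHost lists the machine in its CONSUMER column, as shown in the next step.

USD oc get machine -n openshift-machine-api control-plane-<num>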
To view the BareMetalHost objects, run the following command: USD oc get bmh -A Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m After the RHCOS installation, verify that the BareMetalHost is added to the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.18.2 control-plane-2.example.com available master 141m v1.18.2 control-plane-3.example.com available master 141m v1.18.2 compute-1.example.com available worker 87m v1.18.2 compute-2.example.com available worker 87m v1.18.2 Note After replacement of the new control plane node, the etcd pod running in the new node is in crashloopback status. See "Replacing an unhealthy etcd member" in the Additional resources section for more information. Additional resources Replacing an unhealthy etcd member Backing up etcd BMC addressing 10.5.3. Preparing to deploy with Virtual Media on the baremetal network If the provisioning network is enabled and you want to expand the cluster using Virtual Media on the baremetal network, use the following procedure. Prerequisites There is an existing cluster with a baremetal network and a provisioning network. Procedure Edit the provisioning custom resource (CR) to enable deploying with Virtual Media on the baremetal network: oc edit provisioning apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: "2021-08-05T18:51:50Z" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: "551591" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: preProvisioningOSDownloadURLs: {} provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 provisioningOSDownloadURL: http://192.168.111.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha256> virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: "" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: "" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0 1 Add virtualMediaViaExternalNetwork: true to the provisioning CR. 
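After saving the change, the Cluster Baremetal Operator is expected to reconcile the metal3 deployment so that it picks up the new setting. Watching the pods in the openshift-machine-api namespace is an optional way to confirm this; the exact pod names vary by cluster.

USD oc get pods -n openshift-machine-api | grep metal3

Wait until the metal3 pods report Running again before editing the machine set.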
Edit the machineset to use the API VIP address: oc edit machineset apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: "2021-08-05T18:51:52Z" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: "551513" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.x86_64.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.x86_64.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2 1 Edit the checksum URL to use the API VIP address. 2 Edit the url URL to use the API VIP address. 10.5.4. Diagnosing a duplicate MAC address when provisioning a new host in the cluster If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host. You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace. Prerequisites Install an OpenShift Container Platform cluster on bare metal. Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following: Get the bare-metal hosts running in the openshift-machine-api namespace: USD oc get bmh -n openshift-machine-api Example output NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering To see more detailed information about the status of the failing host, run the following command replacing <bare_metal_host_name> with the name of the host: USD oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml Example output ... status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error ... 10.5.5. Provisioning the bare metal node Provisioning the bare metal node requires executing the following procedure from the provisioner node. 
Procedure Ensure the STATE is ready before provisioning the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> ready true Get a count of the number of worker nodes. USD oc get nodes NAME STATUS ROLES AGE VERSION provisioner.openshift.example.com Ready master 30h v1.22.1 openshift-master-1.openshift.example.com Ready master 30h v1.22.1 openshift-master-2.openshift.example.com Ready master 30h v1.22.1 openshift-master-3.openshift.example.com Ready master 30h v1.22.1 openshift-worker-0.openshift.example.com Ready master 30h v1.22.1 openshift-worker-1.openshift.example.com Ready master 30h v1.22.1 Get the machine set. USD oc get machinesets -n openshift-machine-api NAME DESIRED CURRENT READY AVAILABLE AGE ... openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m Increase the number of worker nodes by one. USD oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the machine set from the previous step. Check the status of the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. The STATE changes from ready to provisioning . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the state will change to provisioned . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true After provisioning completes, ensure the bare metal node is ready. USD oc get nodes NAME STATUS ROLES AGE VERSION provisioner.openshift.example.com Ready master 30h v1.22.1 openshift-master-1.openshift.example.com Ready master 30h v1.22.1 openshift-master-2.openshift.example.com Ready master 30h v1.22.1 openshift-master-3.openshift.example.com Ready master 30h v1.22.1 openshift-worker-0.openshift.example.com Ready master 30h v1.22.1 openshift-worker-1.openshift.example.com Ready master 30h v1.22.1 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.22.1 You can also check the kubelet. USD ssh openshift-worker-<num> [kni@openshift-worker-<num>]USD journalctl -fu kubelet 10.6. Troubleshooting 10.6.1. Troubleshooting the installer workflow Prior to troubleshooting the installation environment, it is critical to understand the overall flow of the installer-provisioned installation on bare metal. The diagrams below provide a troubleshooting flow with a step-by-step breakdown for the environment. Workflow 1 of 4 illustrates a troubleshooting workflow when the install-config.yaml file has errors or the Red Hat Enterprise Linux CoreOS (RHCOS) images are inaccessible. Troubleshooting suggestions can be found at Troubleshooting install-config.yaml . Workflow 2 of 4 illustrates a troubleshooting workflow for bootstrap VM issues , bootstrap VMs that cannot boot up the cluster nodes , and inspecting logs . When installing an OpenShift Container Platform cluster without the provisioning network, this workflow does not apply. Workflow 3 of 4 illustrates a troubleshooting workflow for cluster nodes that will not PXE boot . If installing using RedFish Virtual Media, each node must meet minimum firmware requirements for the installer to deploy the node.
See Firmware requirements for installing with virtual media in the Prerequisites section for additional details. Workflow 4 of 4 illustrates a troubleshooting workflow from a non-accessible API to a validated installation . 10.6.2. Troubleshooting install-config.yaml The install-config.yaml configuration file represents all of the nodes that are part of the OpenShift Container Platform cluster. The file contains the necessary options consisting of but not limited to apiVersion , baseDomain , imageContentSources and virtual IP addresses. If errors occur early in the deployment of the OpenShift Container Platform cluster, the errors are likely in the install-config.yaml configuration file. Procedure Use the guidelines in YAML-tips . Verify the YAML syntax is correct using syntax-check . Verify the Red Hat Enterprise Linux CoreOS (RHCOS) QEMU images are properly defined and accessible via the URL provided in the install-config.yaml . For example: USD curl -s -o /dev/null -I -w "%{http_code}\n" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.x86_64.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7 If the output is 200 , there is a valid response from the webserver storing the bootstrap VM image. 10.6.3. Bootstrap VM issues The OpenShift Container Platform installation program spawns a bootstrap node virtual machine, which handles provisioning the OpenShift Container Platform cluster nodes. Procedure About 10 to 15 minutes after triggering the installation program, check to ensure the bootstrap VM is operational using the virsh command: USD sudo virsh list Id Name State -------------------------------------------- 12 openshift-xf6fq-bootstrap running Note The name of the bootstrap VM is always the cluster name followed by a random set of characters and ending in the word "bootstrap." If the bootstrap VM is not running after 10-15 minutes, troubleshoot why it is not running. Possible issues include: Verify libvirtd is running on the system: USD systemctl status libvirtd ● libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-03-03 21:21:07 UTC; 3 weeks 5 days ago Docs: man:libvirtd(8) https://libvirt.org Main PID: 9850 (libvirtd) Tasks: 20 (limit: 32768) Memory: 74.8M CGroup: /system.slice/libvirtd.service ├─ 9850 /usr/sbin/libvirtd If the bootstrap VM is operational, log in to it. Use the virsh console command to find the IP address of the bootstrap VM: USD sudo virsh console example.com Connected to domain example.com Escape character is ^] Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3 SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519) SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA) SSH host key: SHA256:DH5VWhvhvagOTaLsYiVNse9ca+ZSW/30OOMed8rIGOc (RSA) ens3: fd35:919d:4042:2:c7ed:9a9f:a9ec:7 ens4: 172.22.0.2 fe80::1d05:e52e:be5d:263f localhost login: Important When deploying an OpenShift Container Platform cluster without the provisioning network, you must use a public IP address and not a private IP address like 172.22.0.2 . After you obtain the IP address, log in to the bootstrap VM using the ssh command: Note In the console output of the step, you can use the IPv6 IP address provided by ens3 or the IPv4 IP provided by ens4 . 
USD ssh [email protected] If you are not successful logging in to the bootstrap VM, you have likely encountered one of the following scenarios: You cannot reach the 172.22.0.0/24 network. Verify the network connectivity between the provisioner and the provisioning network bridge. This issue might occur if you are using a provisioning network. ` You cannot reach the bootstrap VM through the public network. When attempting to SSH via baremetal network, verify connectivity on the provisioner host specifically around the baremetal network bridge. You encountered Permission denied (publickey,password,keyboard-interactive) . When attempting to access the bootstrap VM, a Permission denied error might occur. Verify that the SSH key for the user attempting to log into the VM is set within the install-config.yaml file. 10.6.3.1. Bootstrap VM cannot boot up the cluster nodes During the deployment, it is possible for the bootstrap VM to fail to boot the cluster nodes, which prevents the VM from provisioning the nodes with the RHCOS image. This scenario can arise due to: A problem with the install-config.yaml file. Issues with out-of-band network access when using the baremetal network. To verify the issue, there are three containers related to ironic : ironic-api ironic-conductor ironic-inspector Procedure Log in to the bootstrap VM: USD ssh [email protected] To check the container logs, execute the following: [core@localhost ~]USD sudo podman logs -f <container-name> Replace <container-name> with one of ironic-api , ironic-conductor , or ironic-inspector . If you encounter an issue where the control plane nodes are not booting up via PXE, check the ironic-conductor pod. The ironic-conductor pod contains the most detail about the attempt to boot the cluster nodes, because it attempts to log in to the node over IPMI. Potential reason The cluster nodes might be in the ON state when deployment started. Solution Power off the OpenShift Container Platform cluster nodes before you begin the installation over IPMI: USD ipmitool -I lanplus -U root -P <password> -H <out-of-band-ip> power off 10.6.3.2. Inspecting logs When experiencing issues downloading or accessing the RHCOS images, first verify that the URL is correct in the install-config.yaml configuration file. Example of internal webserver hosting RHCOS images bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.x86_64.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.x86_64.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0 The ipa-downloader and coreos-downloader containers download resources from a webserver or the external quay.io registry, whichever the install-config.yaml configuration file specifies. Verify the following two containers are up and running and inspect their logs as needed: ipa-downloader coreos-downloader Procedure Log in to the bootstrap VM: USD ssh [email protected] Check the status of the ipa-downloader and coreos-downloader containers within the bootstrap VM: [core@localhost ~]USD sudo podman logs -f ipa-downloader [core@localhost ~]USD sudo podman logs -f coreos-downloader If the bootstrap VM cannot access the URL to the images, use the curl command to verify that the VM can access the images. 
To inspect the bootkube logs that indicate if all the containers launched during the deployment phase, execute the following:
[core@localhost ~]$ journalctl -xe
[core@localhost ~]$ journalctl -b -f -u bootkube.service
Verify all the pods, including dnsmasq, mariadb, httpd, and ironic, are running:
[core@localhost ~]$ sudo podman ps
If there are issues with the pods, check the logs of the containers with issues. To check the log of the ironic-api, execute the following:
[core@localhost ~]$ sudo podman logs <ironic-api>
10.6.4. Cluster nodes will not PXE boot
When OpenShift Container Platform cluster nodes will not PXE boot, execute the following checks on the cluster nodes that will not PXE boot. This procedure does not apply when installing an OpenShift Container Platform cluster without the provisioning network.
Procedure
Check the network connectivity to the provisioning network.
Ensure PXE is enabled on the NIC for the provisioning network and PXE is disabled for all other NICs.
Verify that the install-config.yaml configuration file has the proper hardware profile and boot MAC address for the NIC connected to the provisioning network. For example:
Control plane node settings
bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC
hardwareProfile: default
Worker node settings
bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC
hardwareProfile: unknown
10.6.5. The API is not accessible
When the cluster is running and clients cannot access the API, domain name resolution issues might impede access to the API.
Procedure
Hostname Resolution: Check the cluster nodes to ensure they have a fully qualified domain name, and not just localhost.localdomain. For example:
$ hostname
If a hostname is not set, set the correct hostname. For example:
$ hostnamectl set-hostname <hostname>
Incorrect Name Resolution: Ensure that each node has the correct name resolution in the DNS server using dig and nslookup. For example:
$ dig api.<cluster-name>.example.com
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> api.<cluster-name>.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37551
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 866929d2f8e8563582af23f05ec44203d313e50948d43f60 (good)
;; QUESTION SECTION:
;api.<cluster-name>.example.com. IN A
;; ANSWER SECTION:
api.<cluster-name>.example.com. 10800 IN A 10.19.13.86
;; AUTHORITY SECTION:
<cluster-name>.example.com. 10800 IN NS <cluster-name>.example.com.
;; ADDITIONAL SECTION:
<cluster-name>.example.com. 10800 IN A 10.19.14.247
;; Query time: 0 msec
;; SERVER: 10.19.14.247#53(10.19.14.247)
;; WHEN: Tue May 19 20:30:59 UTC 2020
;; MSG SIZE rcvd: 140
The output in the foregoing example indicates that the appropriate IP address for the api.<cluster-name>.example.com VIP is 10.19.13.86. This IP address should reside on the baremetal network.
10.6.6. Cleaning up installations
In the event of a failed deployment, remove the artifacts from the failed attempt before attempting to deploy OpenShift Container Platform again.
Procedure
Power off all bare metal nodes prior to installing the OpenShift Container Platform cluster:
$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
Remove all old bootstrap resources if any are left over from a previous deployment attempt:
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'}); do sudo virsh destroy $i; sudo virsh undefine $i; sudo virsh vol-delete $i --pool $i; sudo virsh vol-delete $i.ign --pool $i; sudo virsh pool-destroy $i; sudo virsh pool-undefine $i; done
Remove the following from the clusterconfigs directory to prevent Terraform from failing:
$ rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json
10.6.7. Issues with creating the registry
When creating a disconnected registry, you might encounter a "User Not Authorized" error when attempting to mirror the registry. This error might occur if you fail to append the new authentication to the existing pull-secret.txt file.
Procedure
Check to ensure authentication is successful:
$ /usr/local/bin/oc adm release mirror \
  -a pull-secret-update.json --from=$UPSTREAM_REPO \
  --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
  --to=$LOCAL_REG/$LOCAL_REPO
Note Example output of the variables used to mirror the install images:
UPSTREAM_REPO=${RELEASE_IMAGE}
LOCAL_REG=<registry_FQDN>:<registry_port>
LOCAL_REPO='ocp4/openshift4'
The values of RELEASE_IMAGE and VERSION were set during the Retrieving OpenShift Installer step of the Setting up the environment for an OpenShift installation section.
After mirroring the registry, confirm that you can access it in your disconnected environment:
$ curl -k -u <user>:<password> https://registry.example.com:<registry-port>/v2/_catalog
{"repositories":["<Repo-Name>"]}
10.6.8. Miscellaneous issues
10.6.8.1. Addressing the runtime network not ready error
After the deployment of a cluster you might receive the following error:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network
The Cluster Network Operator is responsible for deploying the networking components in response to a special object created by the installer. It runs very early in the installation process, after the control plane (master) nodes have come up, but before the bootstrap control plane has been torn down. This error can be indicative of more subtle installer issues, such as long delays in bringing up control plane (master) nodes or issues with apiserver communication.
Procedure
Inspect the pods in the openshift-network-operator namespace:
$ oc get all -n openshift-network-operator
NAME                                    READY   STATUS              RESTARTS   AGE
pod/network-operator-69dfd7b577-bg89v   0/1     ContainerCreating   0          149m
On the provisioner node, determine that the network configuration exists:
$ kubectl get network.config.openshift.io cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  serviceNetwork:
  - 172.30.0.0/16
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
If it does not exist, the installer did not create it. To determine why the installer did not create it, execute the following:
$ openshift-install create manifests
Check that the network-operator is running:
$ kubectl -n openshift-network-operator get pods
Retrieve the logs:
$ kubectl -n openshift-network-operator logs -l "name=network-operator"
On high availability clusters with three or more control plane (master) nodes, the Operator will perform leader election and all other Operators will sleep.
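If you need the logs from one specific pod rather than the label selector, you can pass the pod name reported by the earlier oc get all output (the pod name shown here is illustrative):
$ kubectl -n openshift-network-operator logs network-operator-69dfd7b577-bg89v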
For additional details, see Troubleshooting.
10.6.8.2. Cluster nodes not getting the correct IPv6 address over DHCP
If the cluster nodes are not getting the correct IPv6 address over DHCP, check the following:
Ensure the reserved IPv6 addresses reside outside the DHCP range.
In the IP address reservation on the DHCP server, ensure the reservation specifies the correct DHCP Unique Identifier (DUID). For example:
# This is a dnsmasq dhcp reservation, 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC Address for the NIC
id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]
Ensure that route announcements are working.
Ensure that the DHCP server is listening on the required interfaces serving the IP address ranges.
10.6.8.3. Cluster nodes not getting the correct hostname over DHCP
During IPv6 deployment, cluster nodes must get their hostname over DHCP. Sometimes the NetworkManager does not assign the hostname immediately. A control plane (master) node might report an error such as:
Failed Units: 2
  NetworkManager-wait-online.service
  nodeip-configuration.service
This error indicates that the cluster node likely booted without first receiving a hostname from the DHCP server, which causes kubelet to boot with a localhost.localdomain hostname. To address the error, force the node to renew the hostname.
Procedure
Retrieve the hostname:
[core@master-X ~]$ hostname
If the hostname is localhost, proceed with the following steps.
Note Where X is the control plane node number.
Force the cluster node to renew the DHCP lease:
[core@master-X ~]$ sudo nmcli con up "<bare-metal-nic>"
Replace <bare-metal-nic> with the wired connection corresponding to the baremetal network.
Check hostname again:
[core@master-X ~]$ hostname
If the hostname is still localhost.localdomain, restart NetworkManager:
[core@master-X ~]$ sudo systemctl restart NetworkManager
If the hostname is still localhost.localdomain, wait a few minutes and check again. If the hostname remains localhost.localdomain, repeat the previous steps.
Restart the nodeip-configuration service:
[core@master-X ~]$ sudo systemctl restart nodeip-configuration.service
This service will reconfigure the kubelet service with the correct hostname references.
Reload the unit files definition since the kubelet changed in the previous step:
[core@master-X ~]$ sudo systemctl daemon-reload
Restart the kubelet service:
[core@master-X ~]$ sudo systemctl restart kubelet.service
Ensure kubelet booted with the correct hostname:
[core@master-X ~]$ sudo journalctl -fu kubelet.service
If the cluster node is not getting the correct hostname over DHCP after the cluster is up and running, such as during a reboot, the cluster will have a pending csr. Do not approve a csr, or other issues might arise.
Addressing a csr
Get CSRs on the cluster:
$ oc get csr
Verify if a pending csr contains Subject Name: localhost.localdomain:
$ oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text
Remove any csr that contains Subject Name: localhost.localdomain:
$ oc delete csr <wrong_csr>
10.6.8.4. Routes do not reach endpoints
During the installation process, it is possible to encounter a Virtual Router Redundancy Protocol (VRRP) conflict. This conflict might occur if a previously used OpenShift Container Platform node that was once part of a cluster deployment using a specific cluster name is still running but not part of the current OpenShift Container Platform cluster deployment using that same cluster name.
For example, a cluster was deployed using the cluster name openshift, deploying three control plane (master) nodes and three worker nodes. Later, a separate install uses the same cluster name openshift, but this redeployment only installed three control plane (master) nodes, leaving the three worker nodes from the previous deployment in an ON state. This might cause a Virtual Router Identifier (VRID) conflict and a VRRP conflict.
Get the route:
$ oc get route oauth-openshift
Check the service endpoint:
$ oc get svc oauth-openshift
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
oauth-openshift   ClusterIP   172.30.19.162   <none>        443/TCP   59m
Attempt to reach the service from a control plane (master) node:
[core@master0 ~]$ curl -k https://172.30.19.162
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
Identify the authentication-operator errors from the provisioner node:
$ oc logs deployment/authentication-operator -n openshift-authentication-operator
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"225c5bd5-b368-439b-9155-5fd3c0459d98", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting"
Solution
Ensure that the cluster name for every deployment is unique, ensuring no conflict.
Turn off all the rogue nodes which are not part of the cluster deployment that are using the same cluster name. Otherwise, the authentication pod of the OpenShift Container Platform cluster might never start successfully.
10.6.8.5. Failed Ignition during Firstboot
During the Firstboot, the Ignition configuration may fail.
Procedure
Connect to the node where the Ignition configuration failed:
Failed Units: 1
  machine-config-daemon-firstboot.service
Restart the machine-config-daemon-firstboot service:
[core@worker-X ~]$ sudo systemctl restart machine-config-daemon-firstboot.service
10.6.8.6. NTP out of sync
The deployment of OpenShift Container Platform clusters depends on NTP synchronized clocks among the cluster nodes. Without synchronized clocks, the deployment may fail due to clock drift if the time difference is greater than two seconds.
Procedure
Check for differences in the AGE of the cluster nodes. For example:
$ oc get nodes
NAME                         STATUS   ROLES    AGE    VERSION
master-0.cloud.example.com   Ready    master   145m   v1.22.1
master-1.cloud.example.com   Ready    master   135m   v1.22.1
master-2.cloud.example.com   Ready    master   145m   v1.22.1
worker-2.cloud.example.com   Ready    worker   100m   v1.22.1
Check for inconsistent timing delays due to clock drift. For example:
$ oc get bmh -n openshift-machine-api
master-1   error registering master-1   ipmi://<out-of-band-ip>
$ sudo timedatectl
               Local time: Tue 2020-03-10 18:20:02 UTC
           Universal time: Tue 2020-03-10 18:20:02 UTC
                 RTC time: Tue 2020-03-10 18:36:53
                Time zone: UTC (UTC, +0000)
System clock synchronized: no
              NTP service: active
          RTC in local TZ: no
Addressing clock drift in existing clusters
Create a Butane config file including the contents of the chrony.conf file to be delivered to the nodes. In the following example, create 99-master-chrony.bu to add the file to the control plane nodes.
You can modify the file for worker nodes or repeat this procedure for the worker role.
Note See "Creating machine configs with Butane" for information about Butane.
variant: openshift
version: 4.9.0
metadata:
  name: 99-master-chrony
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        server <NTP-server> iburst 1
        stratumweight 0
        driftfile /var/lib/chrony/drift
        rtcsync
        makestep 10 3
        bindcmdaddress 127.0.0.1
        bindcmdaddress ::1
        keyfile /etc/chrony.keys
        commandkey 1
        generatecommandkey
        noclientlog
        logchange 0.5
        logdir /var/log/chrony
1 Replace <NTP-server> with the IP address of the NTP server.
Use Butane to generate a MachineConfig object file, 99-master-chrony.yaml, containing the configuration to be delivered to the nodes:
$ butane 99-master-chrony.bu -o 99-master-chrony.yaml
Apply the MachineConfig object file:
$ oc apply -f 99-master-chrony.yaml
Ensure the System clock synchronized value is yes:
$ sudo timedatectl
               Local time: Tue 2020-03-10 19:10:02 UTC
           Universal time: Tue 2020-03-10 19:10:02 UTC
                 RTC time: Tue 2020-03-10 19:36:53
                Time zone: UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
To set up clock synchronization prior to deployment, generate the manifest files and add this file to the openshift directory. For example:
$ cp chrony-masters.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml
Then, continue to create the cluster.
10.6.9. Reviewing the installation
After installation, ensure the installer deployed the nodes and pods successfully.
Procedure
When the OpenShift Container Platform cluster nodes are installed appropriately, the following Ready state is seen within the STATUS column:
$ oc get nodes
NAME                   STATUS   ROLES           AGE   VERSION
master-0.example.com   Ready    master,worker   4h    v1.22.1
master-1.example.com   Ready    master,worker   4h    v1.22.1
master-2.example.com   Ready    master,worker   4h    v1.22.1
Confirm the installer deployed all pods successfully. The following command excludes pods that are still running or have completed from the output, so that only pods with problems are listed.
$ oc get pods --all-namespaces | grep -iv running | grep -iv complete
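If the command returns no output, all pods are either running or have completed successfully. Any pod that does appear can be inspected in more detail, for example (the pod name and namespace are placeholders):
$ oc describe pod <pod_name> -n <namespace>
$ oc logs <pod_name> -n <namespace>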
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/deploying-installer-provisioned-clusters-on-bare-metal
Chapter 5. Adding Network Interfaces
Chapter 5. Adding Network Interfaces Satellite supports specifying multiple network interfaces for a single host. You can configure these interfaces when creating a new host as described in Section 2.1, "Creating a Host in Red Hat Satellite" or when editing an existing host. There are several types of network interfaces that you can attach to a host. When adding a new interface, select one of: Interface : Allows you to specify an additional physical or virtual interface. There are two types of virtual interfaces you can create. Use VLAN when the host needs to communicate with several (virtual) networks using a single interface, while these networks are not accessible to each other. Use alias to add an additional IP address to an existing interface. For more information about adding a physical interface, see Section 5.1, "Adding a Physical Interface" . For more information about adding a virtual interface, see Section 5.2, "Adding a Virtual Interface" . Bond : Creates a bonded interface. NIC bonding is a way to bind multiple network interfaces together into a single interface that appears as a single device and has a single MAC address. This enables two or more network interfaces to act as one, increasing the bandwidth and providing redundancy. For more information, see Section 5.3, "Adding a Bonded Interface" . BMC : Baseboard Management Controller (BMC) allows you to remotely monitor and manage the physical state of machines. For more information about BMC, see Enabling Power Management on Managed Hosts in Installing Satellite Server in a Connected Network Environment . For more information about configuring BMC interfaces, see Section 5.5, "Adding a Baseboard Management Controller (BMC) Interface" . Note Additional interfaces have the Managed flag enabled by default, which means the new interface is configured automatically during provisioning by the DNS and DHCP Capsule Servers associated with the selected subnet. This requires a subnet with correctly configured DNS and DHCP Capsule Servers. If you use a Kickstart method for host provisioning, configuration files are automatically created for managed interfaces in the post-installation phase at /etc/sysconfig/network-scripts/ifcfg- interface_id . Note Virtual and bonded interfaces currently require a MAC address of a physical device. Therefore, the configuration of these interfaces works only on bare-metal hosts. 5.1. Adding a Physical Interface Use this procedure to add an additional physical interface to a host. Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Keep the Interface option selected in the Type list. Specify a MAC address . This setting is required. Specify the Device Identifier , for example eth0 . The identifier is used to specify this physical interface when creating bonded interfaces, VLANs, and aliases. Specify the DNS name associated with the host's IP address. Satellite saves this name in Capsule Server associated with the selected domain (the "DNS A" field) and Capsule Server associated with the selected subnet (the "DNS PTR" field). A single host can therefore have several DNS entries. Select a domain from the Domain list. To create and manage domains, navigate to Infrastructure > Domains . Select a subnet from the Subnet list. To create and manage subnets, navigate to Infrastructure > Subnets . Specify the IP address . 
Managed interfaces with an assigned DHCP Capsule Server require this setting for creating a DHCP lease. DHCP-enabled managed interfaces are automatically provided with a suggested IP address. Select whether the interface is Managed . If the interface is managed, configuration is pulled from the associated Capsule Server during provisioning, and DNS and DHCP entries are created. If using kickstart provisioning, a configuration file is automatically created for the interface. Select whether this is the Primary interface for the host. The DNS name from the primary interface is used as the host portion of the FQDN. Select whether this is the Provision interface for the host. TFTP boot takes place using the provisioning interface. For image-based provisioning, the script to complete the provisioning is executed through the provisioning interface. Select whether to use the interface for Remote execution . Leave the Virtual NIC checkbox clear. Click OK to save the interface configuration. Click Submit to apply the changes to the host. 5.2. Adding a Virtual Interface Use this procedure to configure a virtual interface for a host. This can be either a VLAN or an alias interface. An alias interface is an additional IP address attached to an existing interface. An alias interface automatically inherits a MAC address from the interface it is attached to; therefore, you can create an alias without specifying a MAC address. The interface must be specified in a subnet with boot mode set to static . Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Keep the Interface option selected in the Type list. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a Physical Interface" . Specify a MAC address for managed virtual interfaces so that the configuration files for provisioning are generated correctly. However, a MAC address is not required for virtual interfaces that are not managed. If creating a VLAN, specify ID in the form of eth1.10 in the Device Identifier field. If creating an alias, use ID in the form of eth1:10 . Select the Virtual NIC checkbox. Additional configuration options specific to virtual interfaces are appended to the form: Tag : Optionally set a VLAN tag to trunk a network segment from the physical network through to the virtual interface. If you do not specify a tag, managed interfaces inherit the VLAN tag of the associated subnet. User-specified entries from this field are not applied to alias interfaces. Attached to : Specify the identifier of the physical interface to which the virtual interface belongs, for example eth1 . This setting is required. Click OK to save the interface configuration. Click Submit to apply the changes to the host. 5.3. Adding a Bonded Interface Use this procedure to configure a bonded interface for a host. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Select Bond from the Type list. Additional type-specific configuration options are appended to the form. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a Physical Interface" . 
Bonded interfaces use IDs in the form of bond0 in the Device Identifier field. A single MAC address is sufficient. Specify the configuration options specific to bonded interfaces: Mode : Select the bonding mode that defines a policy for fault tolerance and load balancing. See Section 5.4, "Bonding Modes Available in Satellite" for a brief description of each bonding mode. Attached devices : Specify a comma-separated list of identifiers of attached devices. These can be physical interfaces or VLANs. Bond options : Specify a space-separated list of configuration options, for example miimon=100 . See the Red Hat Enterprise Linux 7 Networking Guide for details of the configuration options you can specify for the bonded interface. Click OK to save the interface configuration. Click Submit to apply the changes to the host. CLI procedure To create a host with a bonded interface, enter the following command: 5.4. Bonding Modes Available in Satellite Bonding Mode Description balance-rr Transmissions are received and sent sequentially on each bonded interface. active-backup Transmissions are received and sent through the first available bonded interface. Another bonded interface is only used if the active bonded interface fails. balance-xor Transmissions are based on the selected hash policy. In this mode, traffic destined for specific peers is always sent over the same interface. broadcast All transmissions are sent on all bonded interfaces. 802.3ad Creates aggregation groups that share the same settings. Transmits and receives on all interfaces in the active group. balance-tlb The outgoing traffic is distributed according to the current load on each bonded interface. balance-alb Receive load balancing is achieved through Address Resolution Protocol (ARP) negotiation. 5.5. Adding a Baseboard Management Controller (BMC) Interface Use this procedure to configure a baseboard management controller (BMC) interface for a host that supports this feature. Prerequisites The ipmitool package is installed. You know the MAC address, IP address, and other details of the BMC interface on the host, and the appropriate credentials for that interface. Note You only need the MAC address for the BMC interface if the BMC interface is managed, so that it can create a DHCP reservation. Procedure Enable BMC on the Capsule server if it is not already enabled: Configure BMC power management on Capsule Server by running the satellite-installer script with the following options: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column, click Refresh . The list in the Features column should now include BMC. In the Satellite web UI, navigate to Hosts > All hosts . Click Edit next to the host that you want to edit. On the Interfaces tab, click Add Interface . Select BMC from the Type list. Type-specific configuration options are appended to the form. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a Physical Interface" . Specify the configuration options specific to BMC interfaces: Username and Password : Specify any authentication credentials required by BMC. Provider : Specify the BMC provider. Click OK to save the interface configuration. Click Submit to apply the changes to the host.
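The interface types described in this chapter can also be added from the command line rather than the web UI. The following is a minimal sketch that attaches a managed physical interface to an existing host with Hammer; the host name, MAC address, IP address, and the domain and subnet IDs are placeholder values, and option names can differ between Satellite versions, so confirm them with hammer host-interface create --help before running the command.
# Illustrative only: attach a managed physical interface to an existing host
hammer host-interface create --host "host.example.com" --type "interface" --identifier "eth1" --mac "52:54:00:ab:cd:ef" --ip "192.168.100.50" --domain-id 1 --subnet-id 1 --managed true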
[ "hammer host create --name bonded_interface --hostgroup-id 1 --ip= 192.168.100.123 --mac= 52:54:00:14:92:2a --subnet-id= 1 --managed true --interface=\"identifier= eth1 , mac= 52:54:00:62:43:06 , managed=true, type=Nic::Managed, domain_id= 1 , subnet_id= 1 \" --interface=\"identifier= eth2 , mac= 52:54:00:d3:87:8f , managed=true, type=Nic::Managed, domain_id= 1 , subnet_id= 1 \" --interface=\"identifier= bond0 , ip= 172.25.18.123 , type=Nic::Bond, mode=active-backup, attached_devices=[ eth1,eth2 ], managed=true, domain_id= 1 , subnet_id= 1 \" --organization \" My_Organization \" --location \" My_Location \" --ask-root-password yes", "satellite-installer --foreman-proxy-bmc=true --foreman-proxy-bmc-default-provider=ipmitool" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/Adding_Network_Interfaces_managing-hosts
Chapter 1. Deploying an overcloud and Red Hat Ceph Storage
Chapter 1. Deploying an overcloud and Red Hat Ceph Storage Red Hat OpenStack Platform (RHOSP) director deploys the cloud environment, also known as the overcloud, and Red Hat Ceph Storage. Director uses Ansible playbooks provided through the tripleo-ansible package to deploy the Ceph Storage cluster. The director also manages the configuration and scaling operations of the Ceph Storage cluster. For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage Architecture Guide . For more information about services in the Red Hat OpenStack Platform, see Configuring a basic overcloud with the CLI tools in Director Installation and Usage . 1.1. Red Hat Ceph Storage clusters Red Hat Ceph Storage is a distributed data object store designed for performance, reliability, and scalability. Distributed object stores use unstructured data to simultaneously service modern and legacy object interfaces. Ceph Storage is deployed as a cluster. A Ceph Storage cluster consists of two primary types of daemons: Ceph Object Storage Daemon (CephOSD) - The CephOSD performs data storage, data replication, rebalancing, recovery, monitoring, and reporting tasks. Ceph Monitor (CephMon) - The CephMon maintains the primary copy of the cluster map with the current state of the cluster. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . 1.2. Red Hat Ceph Storage node requirements There are additional node requirements using director to create a Ceph Storage cluster: Hardware requirements including processor, memory, and network interface card selection and disk layout are available in the Red Hat Ceph Storage Hardware Guide . Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server. Each Ceph Storage node must have at least two disks. RHOSP director uses cephadm to deploy the Ceph Storage cluster. The cephadm functionality does not support installing Ceph OSD on the root disk of the node. 1.3. Ceph Storage nodes and RHEL compatibility RHOSP 17.0 is supported on RHEL 9.0. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. Before upgrading, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations . 1.4. Deploying Red Hat Ceph Storage You deploy Red Hat Ceph Storage in two phases: Create the Red Hat Ceph Storage cluster before deploying the overcloud. Configure the Red Hat Ceph Storage cluster during overcloud deployment. A Ceph Storage cluster is created ready to serve the Ceph RADOS Block Device (RBD) service. Additionally, the following services are running on the appropriate nodes: Ceph Monitor (CephMon) Ceph Manager (CephMgr) Ceph OSD (CephOSD) Pools and cephx keys are created during the configuration phase. The following Ceph Storage components are not available until after the configuration phase: Ceph Dashboard (CephDashboard) Ceph Object Gateway (CephRGW) Ceph MDS (CephMds) Red Hat Ceph Storage cluster configuration finalizes during overcloud deployment. Daemons and services such as Ceph Object Gateway and Ceph Dashboard deploy according to the overcloud definition. Red Hat OpenStack Platform (RHOSP) services are configured as Ceph Storage cluster clients. 1.5. Red Hat Ceph Storage deployment requirements Provisioning of network resources and bare metal instances is required before Ceph Storage cluster creation. 
Configure the following before creating a Red Hat Ceph Storage cluster: Provision networks with the openstack overcloud network provision command and the cli-overcloud-network-provision.yaml ansible playbook. Provision bare metal instances with the openstack overcloud node provision command to provision bare metal instances using the cli-overcloud-node-provision.yaml ansible playbook. For more information about these tasks, see: Networking Guide Bare Metal Provisioning The following elements must be present in the overcloud environment to finalize the Ceph Storage cluster configuration: Red Hat OpenStack Platform director installed on an undercloud host. See Installing director in Director Installation and Usage. Installation of recommended hardware to support Red Hat Ceph Storage. For more information about recommended hardware, see the Red Hat Ceph Storage Hardware Guide. 1.6. Post deployment verification Director deploys a Ceph Storage cluster ready to serve Ceph RADOS Block Device (RBD) using tripleo-ansible roles executed by the cephadm command. Verify the following are in place after cephadm completes Ceph Storage deployment: SSH access to a CephMon service node to use the sudo cephadm shell command. All OSDs operational. Note Check inoperative OSDs for environmental issues like uncleaned disks. A Ceph configuration file and client administration keyring file in the /etc/ceph directory of CephMon service nodes. The Ceph Storage cluster is ready to serve RBD. Pools, cephx keys, CephDashboard, and CephRGW are configured during overcloud deployment by the openstack overcloud deploy command. This is for two reasons: The Dashboard and RGW services must integrate with haproxy . This is deployed with the overcloud. The creation of pools and cephx keys are dependent on which OpenStack clients are deployed. These resources are created in the Ceph Storage cluster using the client administration keyring file and the ~/deployed_ceph.yaml file output by the openstack overcloud ceph deploy command. For more information about cephadm , see Red Hat Ceph Storage Installation Guide .
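As a rough sketch of the order of operations described in this chapter, the command flow below shows network and node provisioning, Ceph Storage cluster creation, and the final overcloud deployment. The input and output file names are illustrative placeholders, and each command accepts additional options, so treat this as an outline rather than a complete procedure.
# 1. Provision networks and bare metal nodes (example file names)
openstack overcloud network provision --output networks-deployed.yaml network_data.yaml
openstack overcloud node provision --stack overcloud --output baremetal-deployed.yaml baremetal_deployment.yaml
# 2. Create the Ceph Storage cluster before deploying the overcloud
openstack overcloud ceph deploy baremetal-deployed.yaml --output deployed_ceph.yaml
# 3. Finalize Ceph configuration (pools, cephx keys, Dashboard, RGW) during overcloud deployment
openstack overcloud deploy --templates -e deployed_ceph.yaml -e <other_environment_files>
# 4. Post-deployment check from a CephMon service node
sudo cephadm shell -- ceph -s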
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_integrating-an-overcloud-with-red-hat-ceph-storage_deployingcontainerizedrhcs
Chapter 2. Configuring the Knative CLI
Chapter 2. Configuring the Knative CLI You can customize your Knative ( kn ) CLI setup by creating a config.yaml configuration file. You can provide this configuration by using the --config flag, otherwise the configuration is picked up from a default location. The default configuration location conforms to the XDG Base Directory Specification , and is different for UNIX systems and Windows systems. For UNIX systems: If the XDG_CONFIG_HOME environment variable is set, the default configuration location that the Knative ( kn ) CLI looks for is USDXDG_CONFIG_HOME/kn . If the XDG_CONFIG_HOME environment variable is not set, the Knative ( kn ) CLI looks for the configuration in the home directory of the user at USDHOME/.config/kn/config.yaml . For Windows systems, the default Knative ( kn ) CLI configuration location is %APPDATA%\kn . Example configuration file plugins: path-lookup: true 1 directory: ~/.config/kn/plugins 2 eventing: sink-mappings: 3 - prefix: svc 4 group: core 5 version: v1 6 resource: services 7 1 Specifies whether the Knative ( kn ) CLI should look for plugins in the PATH environment variable. This is a boolean configuration option. The default value is false . 2 Specifies the directory where the Knative ( kn ) CLI looks for plugins. The default path depends on the operating system, as described previously. This can be any directory that is visible to the user. 3 The sink-mappings spec defines the Kubernetes addressable resource that is used when you use the --sink flag with a Knative ( kn ) CLI command. 4 The prefix you want to use to describe your sink. svc for a service, channel , and broker are predefined prefixes for the Knative ( kn ) CLI. 5 The API group of the Kubernetes resource. 6 The version of the Kubernetes resource. 7 The plural name of the Kubernetes resource type. For example, services or brokers .
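For illustration, with the svc sink-mapping shown above in place, a sink binding could reference a Kubernetes Service directly through the svc prefix. The deployment, service, and namespace names below are placeholders, and the --config flag is only needed when the configuration file is not in the default location.
# Point a SinkBinding at a Kubernetes Service using the svc prefix from sink-mappings
kn source binding create my-binding --config ~/.config/kn/config.yaml --subject Deployment:apps/v1:my-app --sink svc:my-service:my-namespace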
[ "plugins: path-lookup: true 1 directory: ~/.config/kn/plugins 2 eventing: sink-mappings: 3 - prefix: svc 4 group: core 5 version: v1 6 resource: services 7" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/knative_cli/advanced-kn-config
Chapter 2. Installing Satellite Server
Chapter 2. Installing Satellite Server When the intended host for Satellite Server is in a disconnected environment, you can install Satellite Server by using an external computer to download an ISO image of the packages, and copying the packages to the system you want to install Satellite Server on. This method is not recommended for any other situation as ISO images might not contain the latest updates, bug fixes, and functionality. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. Before you continue, consider which manifests are relevant for your environment. For more information on manifests, see Managing Red Hat Subscriptions in the Content Management Guide . Note You cannot register Satellite Server to itself. 2.1. Downloading the Binary DVD Images Use this procedure to download the ISO images for Red Hat Enterprise Linux and Red Hat Satellite. Procedure Go to Red Hat Customer Portal and log in. Click DOWNLOADS . Select Red Hat Enterprise Linux . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Enterprise Linux for x86_64 for Red Hat Enterprise Linux 8, or Red Hat Enterprise Linux Server for Red Hat Enterprise Linux 7. Version is set to the latest minor version of the product you plan to use as the base operating system. Architecture is set to the 64 bit version. On the Product Software tab, download the applicable Binary DVD image. For Red Hat Enterprise Linux 8, it must be the latest Red Hat Enterprise Linux for x86_64 version. For Red Hat Enterprise Linux 7, it must be the latest Red Hat Enterprise Linux Server version. Click DOWNLOADS and select Red Hat Satellite . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Satellite . Version is set to the latest minor version of the product you plan to use. On the Product Software tab, download the Binary DVD image for the latest Red Hat Satellite version. Copy the ISO files to /var/tmp on the Satellite base operating system or other accessible storage device. 2.2. Configuring the Base Operating System with Offline Repositories in RHEL 7 Use this procedure to configure offline repositories for the Red Hat Enterprise Linux 7 and Red Hat Satellite ISO images. Procedure Create a directory to serve as the mount point for the ISO file corresponding to the base operating system's version. Mount the ISO image for Red Hat Enterprise Linux to the mount point. To copy the ISO file's repository data file and change permissions, enter: Edit the repository data file and add the baseurl directive. Verify that the repository has been configured. Create a directory to serve as the mount point for the ISO file of Satellite Server. Mount the ISO image for Satellite Server to the mount point. 2.3. Configuring the Base Operating System with Offline Repositories in RHEL 8 Use this procedure to configure offline repositories for the Red Hat Enterprise Linux 8 and Red Hat Satellite ISO images. Procedure Create a directory to serve as the mount point for the ISO file corresponding to the version of the base operating system. Mount the ISO image for Red Hat Enterprise Linux to the mount point. To copy the ISO file's repository data file and change permissions, enter: Edit the repository data file and add the baseurl directive. Verify that the repository has been configured. Create a directory to serve as the mount point for the ISO file of Satellite Server. 
Mount the ISO image for Satellite Server to the mount point. 2.4. Installing the Satellite Packages from the Offline Repositories Use this procedure to install the Satellite packages from the offline repositories. Procedure Ensure the ISO images for Red Hat Enterprise Linux Server and Red Hat Satellite are mounted: Import the Red Hat GPG keys: Ensure the base operating system is up to date with the Binary DVD image: Change to the directory where the Satellite ISO is mounted: Run the installation script in the mounted directory: Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Life Cycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.5. Resolving Package Dependency Errors If there are package dependency errors during installation of Satellite Server packages, you can resolve the errors by downloading and installing packages from Red Hat Customer Portal. For more information about resolving dependency errors, see the KCS solution How can I use the yum output to solve yum dependency errors? . If you have successfully installed the Satellite packages, skip this procedure. Procedure Go to the Red Hat Customer Portal and log in. Click DOWNLOADS . Click the Product that contains the package that you want to download. Ensure that you have the correct Product Variant , Version , and Architecture for your environment. Click the Packages tab. In the Search field, enter the name of the package. Click the package. From the Version list, select the version of the package. At the bottom of the page, click Download Now . Copy the package to the Satellite base operating system. On Satellite Server, change to the directory where the package is located: Install the package locally: Change to the directory where the Satellite ISO is mounted: Verify that you have resolved the package dependency errors by installing Satellite Server packages. If there are further package dependency errors, repeat this procedure. Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Life Cycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.6. 
Synchronizing the System Clock With chronyd To minimize the effects of time drift, you must synchronize the system clock on the base operating system on which you want to install Satellite Server with Network Time Protocol (NTP) servers. If the base operating system clock is configured incorrectly, certificate verification might fail. For more information about the chrony suite, see Using the Chrony suite to configure NTP in Red Hat Enterprise Linux 8 Configuring basic system settings , and Configuring NTP Using the chrony Suite in the Red Hat Enterprise Linux 7 System Administrator's Guide . Procedure Install the chrony package: Start and enable the chronyd service: 2.7. Installing the SOS Package on the Base Operating System Install the sos package on the base operating system so that you can collect configuration and diagnostic information from a Red Hat Enterprise Linux system. You can also use it to provide the initial system analysis, which is required when opening a service request with Red Hat Technical Support. For more information on using sos , see the Knowledgebase solution What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? on the Red Hat Customer Portal. Procedure Install the sos package: 2.8. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. Choose from one of the following methods: Section 2.8.1, "Configuring Satellite Installation" . This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. Note Depending on the options that you use when running the Satellite installer, the configuration can take several minutes to complete. An administrator can view the answer file to see previously used options for both methods. 2.8.1. Configuring Satellite Installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the available options and any default values. If you do not specify any values, the default values are used. Specify a meaningful value for the option: --foreman-initial-organization . This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards. If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization name but not the label. Remote Execution is the primary method of managing packages on Content Hosts. 
If you want to use the deprecated Katello Agent instead of Remote Execution SSH, use the --foreman-proxy-content-enable-katello-agent=true option to enable it. The same option should be given on any Capsule Server as well as Satellite Server. By default, all configuration files configured by the installer are managed by Puppet. When satellite-installer runs, it overwrites any manual changes to the Puppet managed files with the initial values. If you want to manage DNS files and DHCP files manually, use the --foreman-proxy-dns-managed=false and --foreman-proxy-dhcp-managed=false options so that Puppet does not manage the files related to the respective services. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . Unmount the ISO images: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 7: 2.9. Disabling Subscription Connection Disable subscription connection on disconnected Satellite Server to avoid connecting to the Red Hat Portal. This will also prevent you from refreshing the manifest, updating upstream entitlements, and changing the status of Simple Content Access. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Set the Subscription Connection Enabled value to No . CLI procedure Enter the following command on Satellite Server: 2.10. Importing a Red Hat Subscription Manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Prerequisites You must have a Red Hat subscription manifest file exported from the Customer Portal. For more information, see Creating and Managing Manifests in Using Red Hat Subscription Management . Ensure that you disable subscription connection on your Satellite Server. For more information, see Section 2.9, "Disabling Subscription Connection" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Browse . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . If the Manage Manifest window does not close automatically, click Close to return to the Subscriptions window. CLI procedure Copy the Red Hat subscription manifest file from your client to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in the Content Management guide.
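After the manifest import completes, a quick sanity check is to list the subscriptions that Satellite now knows about. The organization name below is a placeholder, and the exact output columns vary between Satellite versions.
# Confirm that the manifest import succeeded
hammer subscription list --organization "My_Organization"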
[ "scp localfile username@hostname:remotefile", "mkdir /media/rhel7-server", "mount -o loop rhel7-Server-DVD .iso /media/rhel7-server", "cp /media/rhel7-server/media.repo /etc/yum.repos.d/rhel7-server.repo chmod u+w /etc/yum.repos.d/rhel7-server.repo", "baseurl=file:///media/rhel7-server/", "yum repolist", "mkdir /media/sat6", "mount -o loop sat6-DVD .iso /media/sat6", "mkdir /media/rhel8", "mount -o loop rhel8-DVD .iso /media/rhel8", "cp /media/rhel8/media.repo /etc/yum.repos.d/rhel8.repo chmod u+w /etc/yum.repos.d/rhel8.repo", "[RHEL8-BaseOS] name=Red Hat Enterprise Linux BaseOS mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/BaseOS/ [RHEL8-AppStream] name=Red Hat Enterprise Linux Appstream mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/AppStream/", "yum repolist", "mkdir /media/sat6", "mount -o loop sat6-DVD .iso /media/sat6", "findmnt -t iso9660", "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "yum update", "cd /media/sat6/", "./install_packages", "cd /path-to-package/", "yum localinstall package_name", "cd /media/sat6/", "./install_packages", "yum install chrony", "systemctl start chronyd systemctl enable chronyd", "yum install sos", "satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password", "umount /media/sat6 umount /media/rhel8", "umount /media/sat6 umount /media/rhel7-server", "hammer settings set --name subscription_connection_enabled --value false", "scp ~/ manifest_file .zip root@ satellite.example.com :~/.", "hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_disconnected_network_environment/installing_server_disconnected_satellite
Chapter 1. How to read the Red Hat JBoss Enterprise Application Platform 8.0 documentation
Chapter 1. How to read the Red Hat JBoss Enterprise Application Platform 8.0 documentation We are in the process of modernizing the Red Hat JBoss Enterprise Application Platform 8.0 documentation. We are working to create more solution-centric documentation. The JBoss EAP 8.0 documentation contains content specific to the JBoss EAP 8.0 release including new and enhanced features found in JBoss EAP 8.0. Functionality from releases that are still supported in JBoss EAP 8.0 can be accessed in the JBoss EAP 7.4 documentation set. You can access the documentation set at Product Documentation for Red Hat JBoss Enterprise Application Platform 7.4 . The following is a suggested approach for using the JBoss EAP 8.0 documentation: Read the JBoss EAP 8.0 Release notes to learn about new, enhanced, unsupported, and removed features. Read the other JBoss EAP 8.0 documentation set for detailed information about new and enhanced features. Read the JBoss EAP 8.0 Migration Guide for details on how to migrate applications to JBoss EAP 8.0. If you need information on features supported from releases that have not been enhanced in JBoss EAP 8.0, see the JBoss EAP 7.4 documentation set at Product Documentation for Red Hat JBoss Enterprise Application Platform 7.4 . For example, development and configuration guides are available in the JBoss EAP 7.4 documentation set.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/release_notes_for_red_hat_jboss_enterprise_application_platform_8.0/how-to-read-the-jboss-enterprise-application-platform-8-0-beta-documentation_assembly-release-notes
Chapter 21. Project [config.openshift.io/v1]
Chapter 21. Project [config.openshift.io/v1] Description Project holds cluster-wide information about Project. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 21.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 21.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description projectRequestMessage string projectRequestMessage is the string presented to a user if they are unable to request a project via the projectrequest api endpoint projectRequestTemplate object projectRequestTemplate is the template to use for creating projects in response to projectrequest. This must point to a template in 'openshift-config' namespace. It is optional. If it is not specified, a default template is used. 21.1.2. .spec.projectRequestTemplate Description projectRequestTemplate is the template to use for creating projects in response to projectrequest. This must point to a template in 'openshift-config' namespace. It is optional. If it is not specified, a default template is used. Type object Property Type Description name string name is the metadata.name of the referenced project request template 21.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 21.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/projects DELETE : delete collection of Project GET : list objects of kind Project POST : create a Project /apis/config.openshift.io/v1/projects/{name} DELETE : delete a Project GET : read the specified Project PATCH : partially update the specified Project PUT : replace the specified Project /apis/config.openshift.io/v1/projects/{name}/status GET : read status of the specified Project PATCH : partially update status of the specified Project PUT : replace status of the specified Project 21.2.1. /apis/config.openshift.io/v1/projects HTTP method DELETE Description delete collection of Project Table 21.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Project Table 21.2. HTTP responses HTTP code Reponse body 200 - OK ProjectList schema 401 - Unauthorized Empty HTTP method POST Description create a Project Table 21.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.4. Body parameters Parameter Type Description body Project schema Table 21.5. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 202 - Accepted Project schema 401 - Unauthorized Empty 21.2.2. /apis/config.openshift.io/v1/projects/{name} Table 21.6. Global path parameters Parameter Type Description name string name of the Project HTTP method DELETE Description delete a Project Table 21.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 21.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Project Table 21.9. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Project Table 21.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.11. 
HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Project Table 21.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.13. Body parameters Parameter Type Description body Project schema Table 21.14. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty 21.2.3. /apis/config.openshift.io/v1/projects/{name}/status Table 21.15. Global path parameters Parameter Type Description name string name of the Project HTTP method GET Description read status of the specified Project Table 21.16. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Project Table 21.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.18. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Project Table 21.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.20. Body parameters Parameter Type Description body Project schema Table 21.21. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty
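For illustration, the two spec fields described above can be set on the cluster-wide Project resource with a merge patch such as the following. The message text and template name are placeholders, and the referenced template must already exist in the openshift-config namespace.
# Set a project request message and a custom project request template
oc patch project.config.openshift.io cluster --type merge -p '{"spec":{"projectRequestMessage":"Contact the platform team to request a project.","projectRequestTemplate":{"name":"project-request"}}}'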
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/project-config-openshift-io-v1
Chapter 15. Concepts for sizing CPU and memory resources
Chapter 15. Concepts for sizing CPU and memory resources Use this as a starting point to size a product environment. Adjust the values for your environment as needed based on your load tests. 15.1. Performance recommendations Warning Performance will be lowered when scaling to more Pods (due to additional overhead) and using a cross-datacenter setup (due to additional traffic and operations). Increased cache sizes can improve the performance when Red Hat build of Keycloak instances running for a longer time. This will decrease response times and reduce IOPS on the database. Still, those caches need to be filled when an instance is restarted, so do not set resources too tight based on the stable state measured once the caches have been filled. Use these values as a starting point and perform your own load tests before going into production. Summary: The used CPU scales linearly with the number of requests up to the tested limit below. The used memory scales linearly with the number of active sessions up to the tested limit below. Recommendations: The base memory usage for an inactive Pod is 1000 MB of RAM. For each 100,000 active user sessions, add 500 MB per Pod in a three-node cluster (tested with up to 200,000 sessions). This assumes that each user connects to only one client. Memory requirements increase with the number of client sessions per user session (not tested yet). In containers, Keycloak allocates 70% of the memory limit for heap based memory. It will also use approximately 300 MB of non-heap-based memory. To calculate the requested memory, use the calculation above. As memory limit, subtract the non-heap memory from the value above and divide the result by 0.7. For each 8 password-based user logins per second, 1 vCPU per Pod in a three-node cluster (tested with up to 300 per second). Red Hat build of Keycloak spends most of the CPU time hashing the password provided by the user, and it is proportional to the number of hash iterations. For each 450 client credential grants per second, 1 vCPU per Pod in a three node cluster (tested with up to 2000 per second). Most CPU time goes into creating new TLS connections, as each client runs only a single request. For each 350 refresh token requests per second, 1 vCPU per Pod in a three-node cluster (tested with up to 435 refresh token requests per second). Leave 200% extra head-room for CPU usage to handle spikes in the load. This ensures a fast startup of the node, and sufficient capacity to handle failover tasks like, for example, re-balancing Infinispan caches, when one node fails. Performance of Red Hat build of Keycloak dropped significantly when its Pods were throttled in our tests. 15.1.1. Calculation example Target size: 50,000 active user sessions 24 logins per seconds 450 client credential grants per second 350 refresh token requests per second Limits calculated: CPU requested: 5 vCPU (24 logins per second = 3 vCPU, 450 client credential grants per second = 1 vCPU, 350 refresh token = 1 vCPU) CPU limit: 15 vCPU (Allow for three times the CPU requested to handle peaks, startups and failover tasks) Memory requested: 1250 MB (1000 MB base memory plus 250 MB RAM for 50,000 active sessions) Memory limit: 1360 MB (1250 MB expected memory usage minus 300 non-heap-usage, divided by 0.7) 15.2. Reference architecture The following setup was used to retrieve the settings above to run tests of about 10 minutes for different scenarios: OpenShift 4.14.x deployed on AWS via ROSA. Machinepool with m5.4xlarge instances. 
Red Hat build of Keycloak deployed with the Operator and 3 pods in a high-availability setup with two sites in active/passive mode. OpenShift's reverse proxy running in passthrough mode, where the TLS connection of the client is terminated at the Pod. Database Amazon Aurora PostgreSQL in a multi-AZ setup, with the writer instance in the availability zone of the primary site. Default user password hashing with PBKDF2(SHA512) and 210,000 hash iterations, which is the default as recommended by OWASP . Client credential grants don't use refresh tokens (which is the default). Database seeded with 20,000 users and 20,000 clients. Infinispan local caches at default of 10,000 entries, so not all clients and users fit into the cache, and some requests will need to fetch the data from the database. All sessions in distributed caches as per default, with two owners per entry, allowing one failing Pod without losing data.
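The memory-limit arithmetic from the calculation example in Section 15.1.1 can be reproduced with simple shell arithmetic; the sketch below uses the values from that example and integer math, so the result is rounded slightly differently than the 1360 MB quoted above.
# 1000 MB base memory plus 250 MB for 50,000 active sessions
REQUESTED_MB=1250
NONHEAP_MB=300
# Memory limit = (requested memory - non-heap memory) / 0.7
LIMIT_MB=$(( (REQUESTED_MB - NONHEAP_MB) * 10 / 7 ))
echo "${LIMIT_MB} MB"   # prints 1357 MB, which the example rounds to 1360 MB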
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/concepts-memory-and-cpu-sizing-
3.3. Red Hat Enterprise Linux-Specific Information
3.3. Red Hat Enterprise Linux-Specific Information Monitoring bandwidth and CPU utilization under Red Hat Enterprise Linux entails using the tools discussed in Chapter 2, Resource Monitoring ; therefore, if you have not yet read that chapter, you should do so before continuing. 3.3.1. Monitoring Bandwidth on Red Hat Enterprise Linux As stated in Section 2.4.2, "Monitoring Bandwidth" , it is difficult to directly monitor bandwidth utilization. However, by examining device-level statistics, it is possible to roughly gauge whether insufficient bandwidth is an issue on your system. By using vmstat , it is possible to determine if overall device activity is excessive by examining the bi and bo fields; in addition, taking note of the si and so fields gives you a bit more insight into how much disk activity is due to swap-related I/O: In this example, the bi field shows two blocks/second read from block devices (primarily disk drives), while the bo field shows six blocks/second written to block devices. We can determine that none of this activity was due to swapping, as the si and so fields both show a swap-related I/O rate of zero kilobytes/second. By using iostat , it is possible to gain a bit more insight into disk-related activity: This output shows us that the device with major number 8 (which is /dev/sda , the first SCSI disk) averaged slightly more than one I/O operation per second (the tps field). Most of the I/O activity for this device was writes (the Blk_wrtn field), with slightly more than 25 blocks written each second (the Blk_wrtn/s field). If more detail is required, use iostat 's -x option: Over and above the longer lines containing more fields, the first thing to keep in mind is that this iostat output is now displaying statistics on a per-partition level. By using df to associate mount points with device names, it is possible to use this report to determine if, for example, the partition containing /home/ is experiencing an excessive workload. Actually, each line output from iostat -x is longer and contains more information than this; here is the remainder of each line (with the device column added for easier reading): In this example, it is interesting to note that /dev/sda2 is the system swap partition; it is obvious from the many fields reading 0.00 for this partition that swapping is not a problem on this system. Another interesting point to note is /dev/sda1 . The statistics for this partition are unusual; the overall activity seems low, but why are the average I/O request size (the avgrq-sz field), average wait time (the await field), and the average service time (the svctm field) so much larger than the other partitions? The answer is that this partition contains the /boot/ directory, which is where the kernel and initial ramdisk are stored. When the system boots, the read I/Os (notice that only the rsec/s and rkB/s fields are non-zero; no writing is done here on a regular basis) used during the boot process are for large numbers of blocks, resulting in the relatively long wait and service times iostat displays. It is possible to use sar for a longer-term overview of I/O statistics; for example, sar -b displays a general I/O report: Here, like iostat 's initial display, the statistics are grouped for all block devices. Another I/O-related report is produced using sar -d : This report provides per-device information, but with little detail.
While there are no explicit statistics showing bandwidth utilization for a given bus or datapath, we can at least determine what the devices are doing and use their activity to indirectly determine the bus loading.
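To follow the suggestion above of tying a busy partition back to a mount point, the two commands can be combined along these lines; the mount point is an example, and on current releases iostat is provided by the sysstat package.
# Map the /home mount point to its underlying device
df /home
# Then watch extended per-device statistics: 5-second samples, 3 reports
iostat -x 5 3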
[ "procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 0 248088 158636 480804 0 0 2 6 120 120 10 3 87 0", "Linux 2.4.21-1.1931.2.349.2.2.entsmp (raptor.example.com) 07/21/2003 avg-cpu: %user %nice %sys %idle 5.34 4.60 2.83 87.24 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn dev8-0 1.10 6.21 25.08 961342 3881610 dev8-1 0.00 0.00 0.00 16 0", "Linux 2.4.21-1.1931.2.349.2.2.entsmp (raptor.example.com) 07/21/2003 avg-cpu: %user %nice %sys %idle 5.37 4.54 2.81 87.27 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz /dev/sda 13.57 2.86 0.36 0.77 32.20 29.05 16.10 14.53 54.52 /dev/sda1 0.17 0.00 0.00 0.00 0.34 0.00 0.17 0.00 133.40 /dev/sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11.56 /dev/sda3 0.31 2.11 0.29 0.62 4.74 21.80 2.37 10.90 29.42 /dev/sda4 0.09 0.75 0.04 0.15 1.06 7.24 0.53 3.62 43.01", "Device: avgqu-sz await svctm %util /dev/sda 0.24 20.86 3.80 0.43 /dev/sda1 0.00 141.18 122.73 0.03 /dev/sda2 0.00 6.00 6.00 0.00 /dev/sda3 0.12 12.84 2.68 0.24 /dev/sda4 0.11 57.47 8.94 0.17", "Linux 2.4.21-1.1931.2.349.2.2.entsmp (raptor.example.com) 07/21/2003 12:00:00 AM tps rtps wtps bread/s bwrtn/s 12:10:00 AM 0.51 0.01 0.50 0.25 14.32 12:20:01 AM 0.48 0.00 0.48 0.00 13.32 ... 06:00:02 PM 1.24 0.00 1.24 0.01 36.23 Average: 1.11 0.31 0.80 68.14 34.79", "Linux 2.4.21-1.1931.2.349.2.2.entsmp (raptor.example.com) 07/21/2003 12:00:00 AM DEV tps sect/s 12:10:00 AM dev8-0 0.51 14.57 12:10:00 AM dev8-1 0.00 0.00 12:20:01 AM dev8-0 0.48 13.32 12:20:01 AM dev8-1 0.00 0.00 ... 06:00:02 PM dev8-0 1.24 36.25 06:00:02 PM dev8-1 0.00 0.00 Average: dev8-0 1.11 102.93 Average: dev8-1 0.00 0.00" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-bandwidth-rhlspec
Chapter 6. Installing a cluster on GCP in a restricted network
Chapter 6. Installing a cluster on GCP in a restricted network In OpenShift Container Platform 4.16, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . 6.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). 
Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. 
For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 6.5.3. 
Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 6.2. Machine series for 64-bit ARM machines Tau T2A 6.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 6.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 6.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 6.5.7. 
Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 15 17 18 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 Provide the contents of the certificate file that you used for your mirror registry. 28 Provide the imageContentSources section from the output of the command to mirror the repository. 6.5.8. 
Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 6.5.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 6.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 6.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 6.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 6.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.10. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.12. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
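Because a failed connection to the mirror registry is otherwise discovered only partway through the deployment, it can be worth checking the registry before running the create cluster command. The sketch below is not part of the documented procedure; the registry host registry.example.com:5000, the credentials, and the CA file path are hypothetical placeholders that stand in for the values used in your pullSecret and additionalTrustBundle:

#!/bin/bash
# Pre-flight checks for the mirror registry used in a restricted-network installation.
set -euo pipefail

MIRROR_HOST="registry.example.com:5000"   # hypothetical mirror registry host:port
CA_BUNDLE="./mirror-ca.pem"               # certificate pasted into additionalTrustBundle
REGISTRY_USER="mirroruser"                # hypothetical mirror registry credentials
REGISTRY_PASS="mirrorpass"

# 1. Verify that the certificate presented by the registry chains to the CA bundle
#    that will be supplied in install-config.yaml.
openssl s_client -connect "$MIRROR_HOST" -CAfile "$CA_BUNDLE" </dev/null 2>/dev/null \
  | grep "Verify return code"

# 2. Verify that the registry answers the standard registry v2 API endpoint with the
#    same credentials that are base64-encoded into the pullSecret value.
curl --fail --cacert "$CA_BUNDLE" -u "$REGISTRY_USER:$REGISTRY_PASS" \
  "https://$MIRROR_HOST/v2/" > /dev/null && echo "mirror registry is reachable"

# 3. Print the base64 string expected in the pullSecret auth field for this registry.
echo -n "$REGISTRY_USER:$REGISTRY_PASS" | base64 -w0; echo

If the first check does not report Verify return code: 0 (ok), revisit the certificate in additionalTrustBundle before starting the deployment.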
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", 
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/installing-restricted-networks-gcp-installer-provisioned
31.4. Performance Testing Procedures
31.4. Performance Testing Procedures The goal of this section is to construct a performance profile of the device with VDO installed. Each test should be run with and without VDO installed, so that VDO's performance can be evaluated relative to the performance of the base system. 31.4.1. Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks The goal of this test is to determine the I/O depth that produces the optimal throughput and the lowest latency for your appliance. VDO uses a 4 KB sector size rather than the traditional 512 B used on legacy storage devices. The larger sector size allows it to support higher-capacity storage, improve performance, and match the cache buffer size used by most operating systems. Perform four-corner testing at 4 KB I/O, and I/O depth of 1, 8, 16, 32, 64, 128, 256, 512, 1024: Sequential 100% reads, at fixed 4 KB * Sequential 100% writes, at fixed 4 KB Random 100% reads, at fixed 4 KB * Random 100% writes, at fixed 4 KB ** * Prefill any areas that may be read during the read test by performing a write fio job first ** Re-create the VDO volume after 4 KB random write I/O runs Example shell test input stimulus (write): Record throughput and latency at each data point, and then graph. Repeat the test to complete four-corner testing: --rw=randwrite , --rw=read , and --rw=randread . The result is a graph as shown below. Points of interest are the behavior across the range and the points of inflection where increased I/O depth provides diminishing throughput gains. Sequential access and random access will likely peak at different values, and the peaks may differ across storage configurations. In Figure 31.1, "I/O Depth Analysis" notice the "knee" in each performance curve. Marker 1 identifies the peak sequential throughput at point X, and marker 2 identifies peak random 4 KB throughput at point Z. This particular appliance does not benefit from sequential 4 KB I/O depth > X. Beyond that depth, there are diminishing bandwidth gains, and average request latency will increase 1:1 for each additional I/O request. This particular appliance does not benefit from random 4 KB I/O depth > Z. Beyond that depth, there are diminishing bandwidth gains, and average request latency will increase 1:1 for each additional I/O request. Figure 31.1. I/O Depth Analysis Figure 31.2, "Latency Response of Increasing I/O for Random Writes" shows an example of the random write latency after the "knee" of the curve in Figure 31.1, "I/O Depth Analysis" . Benchmarking practice should test at these points for maximum throughput that incurs the least response time penalty. As we move forward in the test plan for this example appliance, we will collect additional data with I/O depth = Z. Figure 31.2. Latency Response of Increasing I/O for Random Writes 31.4.2. Phase 2: Effects of I/O Request Size The goal of this test is to understand the block size that produces the best performance of the system under test at the optimal I/O depth determined in the previous step. Perform four-corner testing at fixed I/O depth, with varied block size (powers of 2) over the range 8 KB to 1 MB. Remember to prefill any areas to be read and to recreate volumes between tests. Set the I/O Depth to the value determined in Section 31.4.1, "Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks" . Example test input stimulus (write): Record throughput and latency at each data point, and then graph.
Repeat the test to complete four-corner testing: --rw=randwrite , --rw=read , and --rw=randread . There are several points of interest that you may find in the results. In this example: Sequential writes reach a peak throughput at request size Y. This curve demonstrates how applications that are configurable or naturally dominated by certain request sizes may perceive performance. Larger request sizes often provide more throughput because 4 KB I/Os may benefit from merging. Sequential reads reach a similar peak throughput at point Z. Remember that after these peaks, overall latency before the I/O completes will increase with no additional throughput. It would be wise to tune the device to not accept I/Os larger than this size. Random reads achieve peak throughput at point X. Some devices may achieve near-sequential throughput rates at large request size random accesses, while others suffer more penalty when varying from purely sequential access. Random writes achieve peak throughput at point Y. Random writes involve the most interaction with a deduplication device, and VDO achieves high performance especially when request sizes and/or I/O depths are large. The results from this test ( Figure 31.3, "Request Size vs. Throughput Analysis and Key Inflection Points" ) help in understanding the characteristics of the storage device and the user experience for specific applications. Consult with a Red Hat Sales Engineer to determine if there may be further tuning needed to increase performance at different request sizes. Figure 31.3. Request Size vs. Throughput Analysis and Key Inflection Points 31.4.3. Phase 3: Effects of Mixing Read & Write I/Os The goal of this test is to understand how your appliance with VDO behaves when presented with mixed I/O loads (read/write), analyzing the effects of read/write mix at the optimal random queue depth and request sizes from 4 KB to 1 MB. You should use whatever is appropriate in your case. Perform four-corner testing at fixed I/O depth, varied block size (powers of 2) over the 8 KB to 256 KB range, and set the read percentage at 10% increments, beginning with 0%. Remember to prefill any areas to be read and to recreate volumes between tests. Set the I/O Depth to the value determined in Section 31.4.1, "Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks" . Example test input stimulus (read/write mix): Record throughput and latency at each data point, and then graph. Figure 31.4, "Performance Is Consistent across Varying Read/Write Mixes" shows an example of how VDO may respond to I/O loads: Figure 31.4. Performance Is Consistent across Varying Read/Write Mixes Performance (aggregate) and latency (aggregate) are relatively consistent across the range of mixing reads and writes, trending from the lower maximum write throughput to the higher maximum read throughput. This behavior may vary with different storage, but the important observation is that the performance is consistent under varying loads and/or that you can understand the performance expectation for applications that demonstrate specific read/write mixes. If you discover any unexpected results, Red Hat Sales Engineers will be able to help you understand if it is VDO or the storage device itself that needs modification. Note: Systems that do not exhibit a similar response consistency often signify a sub-optimal configuration. Contact your Red Hat Sales Engineer if this occurs. 31.4.4.
Phase 4: Application Environments The goal of these final tests is to understand how the system with VDO behaves when deployed in a real application environment. If possible, use real applications and apply the knowledge learned so far; consider limiting the permissible queue depth on your appliance, and if possible tune the application to issue requests with the block sizes most beneficial to VDO performance. Request sizes, I/O loads, and read/write patterns are generally hard to predict, as they vary by application use case (for example, filers vs. virtual desktops vs. databases), and applications often vary in the types of I/O based on the specific operation or due to multi-tenant access. The final test shows general VDO performance in a mixed environment. If more specific details are known about your expected environment, test those settings as well. Example test input stimulus (read/write mix): Record throughput and latency at each data point, and then graph ( Figure 31.5, "Mixed Environment Performance" ). Figure 31.5. Mixed Environment Performance
[ "for depth in 1 2 4 8 16 32 64 128 256 512 1024 2048; do fio --rw=write --bs=4096 --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=1 --iodepth=USDdepth --scramble_buffers=1 --offset=0 --size=100g done", "z= [see previous step] for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=write --bs=USDiosize\\k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=1 --iodepth=USDz --scramble_buffers=1 --offset=0 --size=100g done", "z= [see previous step] for readmix in 0 10 20 30 40 50 60 70 80 90 100; do for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=rw --rwmixread=USDreadmix --bs=USDiosize\\k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=0 --iodepth=USDz --scramble_buffers=1 --offset=0 --size=100g done done", "for readmix in 20 50 80; do for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=rw --rwmixread=USDreadmix --bsrange=4k-256k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=0 --iodepth=USDiosize --scramble_buffers=1 --offset=0 --size=100g done done" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-ev-performance-testing
5.3.11. Splitting a Volume Group
5.3.11. Splitting a Volume Group To split the physical volumes of a volume group and create a new volume group, use the vgsplit command. Logical volumes cannot be split between volume groups. Each existing logical volume must be entirely on the physical volumes forming either the old or the new volume group. If necessary, however, you can use the pvmove command to force the split. The following example splits off the new volume group smallvg from the original volume group bigvg .
[ "vgsplit bigvg smallvg /dev/ram15 Volume group \"smallvg\" successfully split from \"bigvg\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_split
Chapter 6. Monitoring and managing upgrade of the storage cluster
Chapter 6. Monitoring and managing upgrade of the storage cluster After running the ceph orch upgrade start command to upgrade the Red Hat Ceph Storage cluster, you can check the status, pause, resume, or stop the upgrade process. The health of the cluster changes to HEALTH_WARNING during an upgrade. If a host in the cluster is offline, the upgrade is paused. Note You have to upgrade one daemon type after the other. If a daemon cannot be upgraded, the upgrade is paused. Prerequisites A running Red Hat Ceph Storage cluster 5. Root-level access to all the nodes. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Upgrade for the storage cluster initiated. Procedure Determine whether an upgrade is in progress and the version to which the cluster is upgrading: Example Note You do not get a message once the upgrade is successful. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster. Optional: Pause the upgrade process: Example Optional: Resume a paused upgrade process: Example Optional: Stop the upgrade process: Example
[ "ceph orch upgrade status", "ceph orch upgrade pause", "ceph orch upgrade resume", "ceph orch upgrade stop" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/upgrade_guide/monitoring-and-managing-upgrade-of-the-storage-cluster_upgrade
6.15. Improving Uptime with Virtual Machine High Availability
6.15. Improving Uptime with Virtual Machine High Availability 6.15.1. What is High Availability? High availability is recommended for virtual machines running critical workloads. A highly available virtual machine is automatically restarted, either on its original host or another host in the cluster, if its process is interrupted, such as in the following scenarios: A host becomes non-operational due to hardware failure. A host is put into maintenance mode for scheduled downtime. A host becomes unavailable because it has lost communication with an external storage resource. A highly available virtual machine is not restarted if it is shut down cleanly, such as in the following scenarios: The virtual machine is shut down from within the guest. The virtual machine is shut down from the Manager. The host is shut down by an administrator without being put in maintenance mode first. With storage domains V4 or later, virtual machines have the additional capability to acquire a lease on a special volume on the storage, enabling a virtual machine to start on another host even if the original host loses power. The functionality also prevents the virtual machine from being started on two different hosts, which may lead to corruption of the virtual machine disks. With high availability, interruption to service is minimal because virtual machines are restarted within seconds with no user intervention required. High availability keeps your resources balanced by restarting guests on a host with low current resource utilization, or based on any workload balancing or power saving policies that you configure. This ensures that there is sufficient capacity to restart virtual machines at all times. High Availability and Storage I/O Errors If a storage I/O error occurs, the virtual machine is paused. You can define how the host handles highly available virtual machines after the connection with the storage domain is reestablished; they can either be resumed, ungracefully shut down, or remain paused. For more information about these options, see Virtual Machine High Availability settings explained . 6.15.2. High Availability Considerations A highly available host requires a power management device and fencing parameters. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines: Power management must be configured for the hosts running the highly available virtual machines. The host running the highly available virtual machine must be part of a cluster which has other available hosts. The destination host must be running. The source and destination host must have access to the data domain on which the virtual machine resides. The source and destination host must have access to the same virtual networks and VLANs. There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements. There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements. 6.15.3. Configuring a Highly Available Virtual Machine High availability must be configured individually for each virtual machine. Procedure Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the High Availability tab. Select the Highly Available check box to enable high availability for the virtual machine. 
Select the storage domain to hold the virtual machine lease, or select No VM Lease to disable the functionality, from the Target Storage Domain for VM Lease drop-down list. See What is high availability for more information about virtual machine leases. Important This functionality is only available on storage domains that are V4 or later. Select AUTO_RESUME , LEAVE_PAUSED , or KILL from the Resume Behavior drop-down list. If you defined a virtual machine lease, KILL is the only option available. For more information see Virtual Machine High Availability settings explained . Select Low , Medium , or High from the Priority drop-down list. When migration is triggered, a queue is created in which the high priority virtual machines are migrated first. If a cluster is running low on resources, only the high priority virtual machines are migrated. Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Improving_Uptime_with_Virtual_Machine_High_Availability
Chapter 8. LVM Configuration
Chapter 8. LVM Configuration LVM can be configured during the graphical installation process, the text-based installation process, or during a kickstart installation. You can use the utilities from the lvm package to create your own LVM configuration post-installation, but these instructions focus on using Disk Druid during installation to complete this task. Read Chapter 7, Logical Volume Manager (LVM) first to learn about LVM. An overview of the steps required to configure LVM includes: Creating physical volumes from the hard drives. Creating volume groups from the physical volumes. Creating logical volumes from the volume groups and assigning mount points to the logical volumes. Note Although the following steps are illustrated during a GUI installation, the same can be done during a text-based installation. Two 9.1 GB SCSI drives ( /dev/sda and /dev/sdb ) are used in the following examples. They detail how to create a simple configuration using a single LVM volume group with associated logical volumes during installation. 8.1. Automatic Partitioning On the Disk Partitioning Setup screen, select Automatically partition . For Red Hat Enterprise Linux, LVM is the default method for disk partitioning. If you do not wish to have LVM implemented, or if you require RAID partitioning, manual disk partitioning through Disk Druid is required. The following properties make up the automatically created configuration: The /boot/ partition resides on its own non-LVM partition. In the following example, it is the first partition on the first drive ( /dev/sda1 ). Bootable partitions cannot reside on LVM logical volumes. A single LVM volume group ( VolGroup00 ) is created, which spans all selected drives and all remaining space available. In the following example, the remainder of the first drive ( /dev/sda2 ), and the entire second drive ( /dev/sdb1 ) are allocated to the volume group. Two LVM logical volumes ( LogVol00 and LogVol01 ) are created from the newly created spanned volume group. In the following example, the recommended swap space is automatically calculated and assigned to LogVol01 , and the remainder is allocated to the root file system, LogVol00 . Figure 8.1. Automatic LVM Configuration With Two SCSI Drives Note If enabling quotas is of interest to you, it may be best to modify the automatic configuration to include other mount points, such as /home/ or /var/ , so that each file system has its own independent quota configuration limits. In most cases, the default automatic LVM partitioning is sufficient, but advanced implementations could warrant modification or manual configuration of the LVM partition tables. Note If you anticipate future memory upgrades, leaving some free space in the volume group would allow for easy future expansion of the swap space logical volume on the system; in which case, the automatic LVM configuration should be modified to leave available space for future growth.
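For comparison with the automatic layout described above, this is a hedged sketch of building an equivalent configuration by hand with the lvm package utilities after installation; the device names match the example drives, but the logical volume sizes are only placeholders.

# Initialize the partitions as physical volumes (assumes /dev/sda2 and /dev/sdb1 already exist).
pvcreate /dev/sda2 /dev/sdb1
# Create a single volume group spanning both physical volumes.
vgcreate VolGroup00 /dev/sda2 /dev/sdb1
# Carve out a swap logical volume (size is a placeholder) and give the remaining space to root.
lvcreate -L 2G -n LogVol01 VolGroup00
lvcreate -l 100%FREE -n LogVol00 VolGroup00
# Format and activate the volumes.
mkswap /dev/VolGroup00/LogVol01
swapon /dev/VolGroup00/LogVol01
mkfs.ext3 /dev/VolGroup00/LogVol00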
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/LVM_Configuration
6.14. Migrating Virtual Machines Between Hosts
6.14. Migrating Virtual Machines Between Hosts Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service. The virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host. In the background, the virtual machine's RAM is copied from the source host to the destination host. Storage and network connectivity are not altered. Note A virtual machine that is using a vGPU cannot be migrated to a different host. 6.14.1. Live Migration Prerequisites Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV You can use live migration to seamlessly move virtual machines to support a number of common maintenance tasks. Your Red Hat Virtualization environment must be correctly configured to support live migration well in advance of using it. At a minimum, the following prerequisites must be met to enable successful live migration of virtual machines: The source and destination hosts are members of the same cluster, ensuring CPU compatibility between them. Note Live migrating virtual machines between different clusters is generally not recommended. The source and destination hosts' status is Up . The source and destination hosts have access to the same virtual networks and VLANs. The source and destination hosts have access to the data storage domain on which the virtual machine resides. The destination host has sufficient CPU capacity to support the virtual machine's requirements. The destination host has sufficient unused RAM to support the virtual machine's requirements. The migrating virtual machine does not have the cache!=none custom property set. Live migration is performed using the management network and involves transferring large amounts of data between hosts. Concurrent migrations have the potential to saturate the management network. For best performance, Red Hat recommends creating separate logical networks for management, storage, display, and virtual machine data to minimize the risk of network saturation. Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration Virtual machines with vNICs that are directly connected to a virtual function (VF) of an SR-IOV-enabled host NIC can be further configured to reduce network outage during live migration: Ensure that the destination host has an available VF. Set the Passthrough and Migratable options in the passthrough vNIC's profile. See Enabling Passthrough on a vNIC Profile in the Administration Guide . Enable hotplugging for the virtual machine's network interface. Ensure that the virtual machine has a backup VirtIO vNIC, in addition to the passthrough vNIC, to maintain the virtual machine's network connection during migration. Set the VirtIO vNIC's No Network Filter option before configuring the bond. See Explanation of Settings in the VM Interface Profile Window in the Administration Guide . Add both vNICs as slaves under an active-backup bond on the virtual machine, with the passthrough vNIC as the primary interface. The bond and vNIC profiles can have one of the following configurations: Recommended : The bond is not configured with fail_over_mac=active and the VF vNIC is the primary slave. Disable the VirtIO vNIC profile's MAC-spoofing filter to ensure that traffic passing through the VirtIO vNIC is not dropped because it uses the VF vNIC MAC address. 
See Applying Network Filtering in the RHEL 7 Virtualization Deployment and Administration Guide . The bond is configured with fail_over_mac=active . This failover policy ensures that the MAC address of the bond is always the MAC address of the active slave. During failover, the virtual machine's MAC address changes, with a slight disruption in traffic. 6.14.2. Optimizing Live Migration Live virtual machine migration can be a resource-intensive operation. The following two options can be set globally for every virtual machine in the environment, at the cluster level, or at the individual virtual machine level to optimize live migration. The Auto Converge migrations option allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. The Enable migration compression option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Both options are disabled globally by default. Configuring Auto-convergence and Migration Compression for Virtual Machine Migration Configure the optimization settings at the global level: Enable auto-convergence at the global level: Enable migration compression at the global level: Restart the ovirt-engine service to apply the changes: Configure the optimization settings at the cluster level: Click Compute Clusters and select a cluster. Click Edit . Click the Migration Policy tab. From the Auto Converge migrations list, select Inherit from global setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from global setting , Compress , or Don't Compress . Click OK . Configure the optimization settings at the virtual machine level: Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. From the Auto Converge migrations list, select Inherit from cluster setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from cluster setting , Compress , or Don't Compress . Click OK . 6.14.3. Guest Agent Hooks Hooks are scripts that trigger activity within a virtual machine when key events occur: Before migration After migration Before hibernation After hibernation The hooks configuration base directory is /etc/ovirt-guest-agent/hooks.d on Linux systems and C:\Program Files\Redhat\RHEV\Drivers\Agent on Windows systems. Each event has a corresponding subdirectory: before_migration and after_migration , before_hibernation and after_hibernation . All files or symbolic links in that directory will be executed. The executing user on Linux systems is ovirtagent . If the script needs root permissions, the elevation must be executed by the creator of the hook script. The executing user on Windows systems is the System Service user. 6.14.4. 
Automatic Virtual Machine Migration Red Hat Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster. From version 4.3, all virtual machines defined with manual or automatic migration modes are migrated when the host is moved into maintenance mode. However, for high performance and/or pinned virtual machines, a Maintenance Host window is displayed, asking you to confirm the action because the performance on the target host may be less than the performance on the current host. The Manager automatically initiates live migration of virtual machines in order to maintain load-balancing or power-saving levels in line with scheduling policy. Specify the scheduling policy that best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required. If your virtual machines are configured for high performance, and/or if they have been pinned (by setting Passthrough Host CPU, CPU Pinning, or NUMA Pinning), the migration mode is set to Allow manual migration only . However, this can be changed to Allow Manual and Automatic mode if required. Special care should be taken when changing the default migration setting so that it does not result in a virtual machine migrating to a host that does not support high performance or pinning. 6.14.5. Preventing Automatic Migration of a Virtual Machine Red Hat Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host. The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite. Preventing Automatic Migration of Virtual Machines Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. In the Start Running On section, select Any Host in Cluster or Specific Host(s) , which enables you to select multiple hosts. Warning Explicitly assigning a virtual machine to a specific host and disabling migration are mutually exclusive with Red Hat Virtualization high availability. Important If the virtual machine has host devices directly attached to it, and a different host is specified, the host devices from the host will be automatically removed from the virtual machine. Select Allow manual migration only or Do not allow migration from the Migration Options drop-down list. Optionally, select the Use custom migration downtime check box and specify a value in milliseconds. Click OK . 6.14.6. Manually Migrating Virtual Machines A running virtual machine can be live migrated to any host within its designated host cluster. Live migration of virtual machines does not cause any service interruption. Migrating virtual machines to a different host is especially useful if the load on a particular host is too high. For live migration prerequisites, see Section 6.14.1, "Live Migration Prerequisites" . For high performance virtual machines and/or virtual machines defined with Pass-Through Host CPU , CPU Pinning , or NUMA Pinning , the default migration mode is Manual . 
Select Select Host Automatically so that the virtual machine migrates to the host that offers the best performance. Note When you place a host into maintenance mode, the virtual machines running on that host are automatically migrated to other hosts in the same cluster. You do not need to manually migrate these virtual machines. Note Live migrating virtual machines between different clusters is generally not recommended. The only currently supported use case is documented at https://access.redhat.com/articles/1390733 . Manually Migrating Virtual Machines Click Compute Virtual Machines and select a running virtual machine. Click Migrate . Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host , specifying the host using the drop-down list. Note When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the scheduling policy. Click OK . During migration, progress is shown in the Migration progress bar. Once migration is complete, the Host column will update to display the host the virtual machine has been migrated to. 6.14.7. Setting Migration Priority Red Hat Virtualization Manager queues concurrent requests for migration of virtual machines off of a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster. You can influence the ordering of the migration queue by setting the priority of each virtual machine; for example, setting mission critical virtual machines to migrate before others. Migrations will be ordered by priority; virtual machines with the highest priority will be migrated first. Setting Migration Priority Click Compute Virtual Machines and select a virtual machine. Click Edit . Select the High Availability tab. Select Low , Medium , or High from the Priority drop-down list. Click OK . 6.14.8. Canceling Ongoing Virtual Machine Migrations A virtual machine migration is taking longer than you expected. You'd like to be sure where all virtual machines are running before you make any changes to your environment. Canceling Ongoing Virtual Machine Migrations Select the migrating virtual machine. It is displayed in Compute Virtual Machines with a status of Migrating from . Click More Actions , then click Cancel Migration . The virtual machine status returns from Migrating from to Up . 6.14.9. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples: Example 6.4. Notification in the Events Tab of the Administration Portal Highly Available Virtual_Machine_Name failed. It will be restarted automatically. Virtual_Machine_Name was restarted on Host Host_Name Example 6.5. Notification in the Manager engine.log This log can be found on the Red Hat Virtualization Manager at /var/log/ovirt-engine/engine.log : Failed to start Highly Available VM. Attempting to restart. VM Name: Virtual_Machine_Name , VM Id: Virtual_Machine_ID_Number
[ "engine-config -s DefaultAutoConvergence=True", "engine-config -s DefaultMigrationCompression=True", "systemctl restart ovirt-engine.service" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-migrating_virtual_machines_between_hosts
Chapter 8. Asynchronous errata updates
Chapter 8. Asynchronous errata updates 8.1. RHSA-2025:0082 OpenShift Data Foundation 4.16.5 bug fixes and security updates OpenShift Data Foundation release 4.16.5 is now available. The bug fixes that are included in the update are listed in the RHSA-2025:0082 advisory. 8.2. RHSA-2024:11292 OpenShift Data Foundation 4.16.4 bug fixes and security updates OpenShift Data Foundation release 4.16.4 is now available. The bug fixes that are included in the update are listed in the RHSA-2024:11292 advisory. 8.3. RHSA-2024:8113 OpenShift Data Foundation 4.16.3 bug fixes and security updates OpenShift Data Foundation release 4.16.3 is now available. The bug fixes that are included in the update are listed in the RHSA-2024:8113 advisory. 8.4. RHSA-2024:6755 OpenShift Data Foundation 4.16.2 bug fixes and security updates OpenShift Data Foundation release 4.16.2 is now available. The bug fixes that are included in the update are listed in the RHSA-2024:6755 advisory. 8.5. RHSA-2024:5547 OpenShift Data Foundation 4.16.1 bug fixes and security updates OpenShift Data Foundation release 4.16.1 is now available. The bug fixes that are included in the update are listed in the RHSA-2024:5547 advisory.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/4.16_release_notes/asynchronous_errata_updates
Chapter 13. Restoring ceph-monitor quorum in OpenShift Data Foundation
Chapter 13. Restoring ceph-monitor quorum in OpenShift Data Foundation In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon must be healthy. The following steps remove the unhealthy mons from quorum, enable you to form a quorum again with a single mon , and then bring the quorum back to the original size. For example, if you have three mons and lose quorum, you need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon . Procedure Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap . Inject a new monmap . Warning You must inject the monmap very carefully. If run incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b , while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c . Take a backup of the current rook-ceph-mon-b Deployment: Open the YAML file and copy the command and arguments from the mon container (see containers list in the following example). This is needed for the monmap changes. Clean up the copied command and args fields to form a pastable command as follows: Note Make sure to remove the single quotes around the --log-stderr-prefix flag and the parentheses around the variables being passed ( ROOK_CEPH_MON_HOST , ROOK_CEPH_MON_INITIAL_MEMBERS , and ROOK_POD_IP ). Patch the rook-ceph-mon-b Deployment to stop this mon from running without deleting the mon pod. Perform the following steps on the mon-b pod: Connect to the pod of a healthy mon and run the following commands: Set the variable. Extract the monmap to a file by pasting the ceph mon command from the good mon deployment and adding the --extract-monmap=USD{monmap_path} flag. Review the contents of the monmap . Remove the bad mons from the monmap . In this example we remove mon0 and mon2 : Inject the modified monmap into the good mon by pasting the ceph mon command and adding the --inject-monmap=USD{monmap_path} flag as follows: Exit the shell to continue. Edit the Rook configmaps . Edit the configmap that the operator uses to track the mons . Verify that in the data element you see three mons such as the following (or more, depending on your moncount ): Delete the bad mons from the list to end up with a single good mon . For example: Save the file and exit. Now, you need to adapt a Secret which is used for the mons and other components. Set a value for the variable good_mon_id . For example: You can use the oc patch command to patch the rook-ceph-config secret and update the two key/value pairs mon_host and mon_initial_members . Note If you are using hostNetwork: true , you need to replace the mon_host variable with the node IP the mon is pinned to ( nodeSelector ). This is because there is no rook-ceph-mon-* service created in that "mode". Restart the mon . You need to restart the good mon pod with the original ceph-mon command to pick up the changes. Use the oc replace command on the backup of the mon deployment YAML file: Note Option --force deletes the deployment and creates a new one. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again. Delete the two mon deployments that are no longer expected to be in quorum.
For example: In this example, the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c . Restart the operator. Start the rook operator again to resume monitoring the health of the cluster. Note It is safe to ignore errors stating that a number of resources already exist. The operator automatically adds more mons to increase the quorum size again, depending on the mon count.
[ "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0", "oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml", "[...] containers: - args: - --fsid=41a537f2-f282-428e-989f-a9e07be32e47 - --keyring=/etc/ceph/keyring-store/keyring - --log-to-stderr=true - --err-to-stderr=true - --mon-cluster-log-to-stderr=true - '--log-stderr-prefix=debug ' - --default-log-to-file=false - --default-mon-cluster-log-to-file=false - --mon-host=USD(ROOK_CEPH_MON_HOST) - --mon-initial-members=USD(ROOK_CEPH_MON_INITIAL_MEMBERS) - --id=b - --setuser=ceph - --setgroup=ceph - --foreground - --public-addr=10.100.13.242 - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db - --public-bind-addr=USD(ROOK_POD_IP) command: - ceph-mon [...]", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP", "oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'", "oc -n openshift-storage exec -it <mon-pod> bash", "monmap_path=/tmp/monmap", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --extract-monmap=USD{monmap_path}", "monmaptool --print /tmp/monmap", "monmaptool USD{monmap_path} --rm <bad_mon>", "monmaptool USD{monmap_path} --rm a monmaptool USD{monmap_path} --rm c", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --inject-monmap=USD{monmap_path}", "oc -n openshift-storage edit configmap rook-ceph-mon-endpoints", "data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789", "data: b=10.100.13.242:6789", "good_mon_id=b", "mon_host=USD(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}') oc -n openshift-storage patch secret rook-ceph-config -p '{\"stringData\": {\"mon_host\": \"[v2:'\"USD{mon_host}\"':3300,v1:'\"USD{mon_host}\"':6789]\", \"mon_initial_members\": \"'\"USD{good_mon_id}\"'\"}}'", "oc replace --force -f 
rook-ceph-mon-b-deployment.yaml", "oc delete deploy <rook-ceph-mon-1> oc delete deploy <rook-ceph-mon-2>", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/restoring-ceph-monitor-quorum-in-openshift-data-foundation_rhodf
14.5.2. Connecting the Serial Console for the Guest Virtual Machine
14.5.2. Connecting the Serial Console for the Guest Virtual Machine The virsh console <domain> [--devname <string>] [--force] [--safe] command connects the virtual serial console for the guest virtual machine. The optional --devname <string> parameter refers to the device alias of an alternate console, serial, or parallel device configured for the guest virtual machine. If this parameter is omitted, the primary console is opened. The --force option forces the console connection or, when used with disconnect, forcibly disconnects existing connections. Using the --safe option only allows the guest to connect if safe console handling is supported.
[ "virsh console virtual_machine --safe" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-connecting_the_serial_console_for_the_guest_virtual_machine
10.7. VLAN on Bond and Bridge Using the NetworkManager Command Line Tool, nmcli
10.7. VLAN on Bond and Bridge Using the NetworkManager Command Line Tool, nmcli To use VLANs over bonds and bridges, proceed as follows: Add a bond device: Note that in this case a bond connection serves only as a "lower interface" for VLAN, and does not get any IP address. Therefore, the ipv4.method disabled and ipv6.method ignore parameters have been added on the command line. Add ports to the bond device: Add a bridge device: Add a VLAN interface on top of bond, assigned to the bridge device: View the created connections:
[ "~]USD nmcli connection add type bond con-name Bond0 ifname bond0 bond.options \"mode=active-backup,miimon=100\" ipv4.method disabled ipv6.method ignore", "~]USD nmcli connection add type ethernet con-name Slave1 ifname em1 master bond0 slave-type bond ~]USD nmcli connection add type ethernet con-name Slave2 ifname em2 master bond0 slave-type bond", "~]USD nmcli connection add type bridge con-name Bridge0 ifname br0 ipv4.method manual ipv4.addresses 192.0.2.1/24", "~]USD nmcli connection add type vlan con-name Vlan2 ifname bond0.2 dev bond0 id 2 master br0 slave-type bridge", "~]USD nmcli connection show NAME UUID TYPE DEVICE Bond0 f05806fa-72c3-4803-8743-2377f0c10bed bond bond0 Bridge0 22d3c0de-d79a-4779-80eb-10718c2bed61 bridge br0 Slave1 e59e13cb-d749-4df2-aee6-de3bfaec698c 802-3-ethernet em1 Slave2 25361a76-6b3c-4ae5-9073-005be5ab8b1c 802-3-ethernet em2 Vlan2 e2333426-eea4-4f5d-a589-336f032ec822 vlan bond0.2" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-vlan_on_bond_and_bridge_using_the_networkmanager_command_line_tool_nmcli
4.9. Mounting File Systems
4.9. Mounting File Systems By default, when a file system that supports extended attributes is mounted, the security context for each file is obtained from the security.selinux extended attribute of the file. Files in file systems that do not support extended attributes are assigned a single, default security context from the policy configuration, based on file system type. Use the mount -o context command to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. This is useful if you do not trust a file system to supply the correct attributes, for example, removable media used in multiple systems. The mount -o context command can also be used to support labeling for file systems that do not support extended attributes, such as File Allocation Table (FAT) or NFS volumes. The context specified with the context option is not written to disk: the original contexts are preserved, and are seen when mounting without context if the file system had extended attributes in the first place. For further information about file system labeling, see James Morris's "Filesystem Labeling in SELinux" article: http://www.linuxjournal.com/article/7426 . 4.9.1. Context Mounts To mount a file system with the specified context, overriding existing contexts if they exist, or to specify a different, default context for a file system that does not support extended attributes, as the root user, use the mount -o context= SELinux_user:role:type:level command when mounting the required file system. Context changes are not written to disk. By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Without additional mount options, this may prevent sharing NFS volumes using other services, such as the Apache HTTP Server. The following example mounts an NFS volume so that it can be shared using the Apache HTTP Server: Newly-created files and directories on this file system appear to have the SELinux context specified with -o context . However, since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the required context. For information about making context mounts persistent, see Section 4.9.5, "Making Context Mounts Persistent" . Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored, so, when overriding the SELinux context with -o context , use the SELinux system_u user and object_r role, and concentrate on the type. If you are not using the MLS policy or multi-category security, use the s0 level. Note When a file system is mounted with a context option, context changes by users and processes are prohibited. For example, running the chcon command on a file system mounted with a context option results in an Operation not supported error. 4.9.2. Changing the Default Context As mentioned in Section 4.8, "The file_t and default_t Types" , on file systems that support extended attributes, when a file that lacks an SELinux context on disk is accessed, it is treated as if it had a default context as defined by SELinux policy. In common policies, this default context uses the file_t type. If it is desirable to use a different default context, mount the file system with the defcontext option.
The following example mounts a newly-created file system on /dev/sda2 to the newly-created test/ directory. It assumes that there are no rules in /etc/selinux/targeted/contexts/files/ that define a context for the test/ directory: In this example: the defcontext option defines that system_u:object_r:samba_share_t:s0 is "the default security context for unlabeled files" [5] . when mounted, the root directory ( test/ ) of the file system is treated as if it is labeled with the context specified by defcontext (this label is not stored on disk). This affects the labeling for files created under test/ : new files inherit the samba_share_t type, and these labels are stored on disk. files created under test/ while the file system was mounted with a defcontext option retain their labels. 4.9.3. Mounting an NFS Volume By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Depending on policy configuration, services, such as Apache HTTP Server and MariaDB, may not be able to read files labeled with the nfs_t type. This may prevent file systems labeled with this type from being mounted and then read or exported by other services. If you would like to mount an NFS volume and read or export that file system with another service, use the context option when mounting to override the nfs_t type. Use the following context option to mount NFS volumes so that they can be shared using the Apache HTTP Server: Since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the required context. For information about making context mounts persistent, see Section 4.9.5, "Making Context Mounts Persistent" . As an alternative to mounting file systems with context options, Booleans can be enabled to allow services access to file systems labeled with the nfs_t type. See Part II, "Managing Confined Services" for instructions on configuring Booleans to allow services access to the nfs_t type. 4.9.4. Multiple NFS Mounts When mounting multiple mounts from the same NFS export, attempting to override the SELinux context of each mount with a different context results in subsequent mount commands failing. In the following example, the NFS server has a single export, export/ , which has two subdirectories, web/ and database/ . The following commands attempt two mounts from a single NFS export, and try to override the context for each one: The second mount command fails, and the following is logged to /var/log/messages : To mount multiple mounts from a single NFS export, with each mount having a different context, use the -o nosharecache,context options. The following example mounts multiple mounts from a single NFS export, with a different context for each mount (allowing a single service access to each one): In this example, server:/export/web is mounted locally to the /local/web/ directory, with all files being labeled with the httpd_sys_content_t type, allowing Apache HTTP Server access. server:/export/database is mounted locally to /local/database/ , with all files being labeled with the mysqld_db_t type, allowing MariaDB access. These type changes are not written to disk. Important The nosharecache option allows you to mount the same subdirectory of an export multiple times with different contexts, for example, mounting /export/web/ multiple times.
Do not mount the same subdirectory from an export multiple times with different contexts, as this creates an overlapping mount, where files are accessible under two different contexts. 4.9.5. Making Context Mounts Persistent To make context mounts persistent across remounting and reboots, add entries for the file systems in the /etc/fstab file or an automounter map, and use the required context as a mount option. The following example adds an entry to /etc/fstab for an NFS context mount: [5] Morris, James. "Filesystem Labeling in SELinux". Published 1 October 2004. Accessed 14 October 2008: http://www.linuxjournal.com/article/7426 .
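As a quick check that a context mount defined in /etc/fstab is applied, the entry can be mounted by its mount point and the resulting label inspected. This is a hedged sketch rather than part of the original procedure: the mount point matches the example fstab entry above, and the type shown depends on the context= value used in that entry (for the NFS example, httpd_sys_content_t rather than the default nfs_t).
~]# mount /local/mount/
~]# ls -dZ /local/mount/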
[ "~]# mount server:/export /local/mount/point -o \\ context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount /dev/sda2 /test/ -o defcontext=\"system_u:object_r:samba_share_t:s0\"", "~]# mount server:/export /local/mount/point -o context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount server:/export/web /local/web -o context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount server:/export/database /local/database -o context=\"system_u:object_r:mysqld_db_t:s0\"", "kernel: SELinux: mount invalid. Same superblock, different security settings for (dev 0:15, type nfs)", "~]# mount server:/export/web /local/web -o nosharecache,context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount server:/export/database /local/database -o \\ nosharecache,context=\"system_u:object_r:mysqld_db_t:s0\"", "server:/export /local/mount/ nfs context=\"system_u:object_r:httpd_sys_content_t:s0\" 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-mounting_file_systems
Chapter 10. Container Images Based on Red Hat Software Collections 3.1
Chapter 10. Container Images Based on Red Hat Software Collections 3.1
Component - Description - Supported architectures
Application Images:
rhscl/php-70-rhel7 - PHP 7.0 platform for building and running applications (EOL) - x86_64
rhscl/perl-526-rhel7 - Perl 5.26 platform for building and running applications (EOL) - x86_64
Daemon Images:
rhscl/varnish-5-rhel7 - Varnish Cache 5.0 HTTP reverse proxy (EOL) - x86_64, s390x, ppc64le
Database Images:
rhscl/mongodb-36-rhel7 - MongoDB 3.6 NoSQL database server (EOL) - x86_64
rhscl/postgresql-10-rhel7 - PostgreSQL 10 SQL database server - x86_64, s390x, ppc64le
Red Hat Developer Toolset Images:
rhscl/devtoolset-7-toolchain-rhel7 - Red Hat Developer Toolset toolchain (EOL) - x86_64, s390x, ppc64le
rhscl/devtoolset-7-perftools-rhel7 - Red Hat Developer Toolset perftools (EOL) - x86_64, s390x, ppc64le
Legend:
x86_64 - AMD64 and Intel 64 architectures
s390x - 64-bit IBM Z
ppc64le - IBM POWER, little endian
All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.1, see the Red Hat Software Collections 3.1 Release Notes . For more information about the Red Hat Developer Toolset 7.1 components, see the Red Hat Developer Toolset 7 User Guide . For information regarding container images based on Red Hat Software Collections 2, see Using Red Hat Software Collections 2 Container Images . EOL images are no longer supported.
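As an illustration only, an image from the list can be pulled from the Red Hat Container Registry on a Red Hat Enterprise Linux 7 host with docker. This is a hedged sketch: the repository path is assumed to follow the component names listed above, and pulling may require an active subscription and registry authentication.
docker pull registry.access.redhat.com/rhscl/postgresql-10-rhel7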
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/RHSCL_3.1_images
Chapter 2. Working with pods
Chapter 2. Working with pods 2.1. Using pods A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. 2.1.1. Understanding pods Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking. Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers. OpenShift Container Platform treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Container Platform implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users. Note For the maximum number of pods per OpenShift Container Platform node host, see the Cluster Limits. Warning Bare pods that are not managed by a replication controller will not be rescheduled upon node disruption. 2.1.2. Example pod configurations OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. The following is an example definition of a pod. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here: Pod object definition (YAML) kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - "1000000" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: ["ALL"] resources: limits: memory: "100Mi" cpu: "1" requests: memory: "100Mi" cpu: "1" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi 1 Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. 2 The pod restart policy with possible values Always , OnFailure , and Never . The default value is Always . 3 OpenShift Container Platform defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed. 4 containers specifies an array of one or more container definitions. 5 The container specifies where external storage volumes are mounted within the container. 6 Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files.
It is safe to mount the host by using /host . 7 Each container in the pod is instantiated from its own container image. 8 The pod defines storage volumes that are available to its container(s) to use. If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . Note This pod definition does not include attributes that are filled by OpenShift Container Platform automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods. 2.1.3. Additional resources For more information on pods and storage see Understanding persistent storage and Understanding ephemeral storage . 2.2. Viewing pods As an administrator, you can view the pods in your cluster and to determine the health of those pods and the cluster as a whole. 2.2.1. About pods OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods. 2.2.2. Viewing pods in a project You can view a list of pods associated with the current project, including the number of replica, the current status, number or restarts and the age of the pod. Procedure To view the pods in a project: Change to the project: USD oc project <project-name> Run the following command: USD oc get pods For example: USD oc get pods Example output NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m Add the -o wide flags to view the pod IP address and the node where the pod is located. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none> 2.2.3. Viewing pod usage statistics You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: Run the following command: USD oc adm top pods For example: USD oc adm top pods -n openshift-console Example output NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi Run the following command to view the usage statistics for pods with labels: USD oc adm top pod --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . For example: USD oc adm top pod --selector='name=my-pod' 2.2.4. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ). 
Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 2.3. Configuring an OpenShift Container Platform cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers using such tools as what a pod does when it exits, ensuring that the required number of pods is always running, when to restart pods designed to run only once, limit the bandwidth available to pods, and how to keep pods running during disruptions. 2.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Container Platform starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by runs. Note Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. 
It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Container Platform uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 2.3.2. Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods must always be available, even during a disruption. maxUnavailable is the number of pods can be unavailable during a disruption. Note Available refers to the number of pods that has condition Ready=True . Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with the an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to project: USD oc create -f </path/to/file> -n <project_name> 2.3.3.2. Specifying the eviction policy for unhealthy pods When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction. You can choose one of the following policies: IfHealthyBudget Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted. AlwaysAllow Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget is met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status. Note It is recommended to set the unhealthyPodEvictionPolicy field to AlwaysAllow in the PodDisruptionBudget object to support the eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed. 
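Before changing the policy with the procedure that follows, it can be helpful to check what an existing pod disruption budget currently specifies. The following is a hedged sketch, not part of the official procedure; my-pdb and <project_name> are placeholders, and the command prints nothing when the field is unset, in which case the default IfHealthyBudget behavior applies.
USD oc get pdb my-pdb -n <project_name> -o jsonpath='{.spec.unhealthyPodEvictionPolicy}'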
Procedure Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy: Example pod-disruption-budget.yaml file apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1 1 Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty. Create the PodDisruptionBudget object by running the following command: USD oc create -f pod-disruption-budget.yaml With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB. Additional resources Enabling features using feature gates Unhealthy Pod Eviction Policy in the Kubernetes documentation 2.3.4. Preventing pod removal using critical pods There are a number of core components that are critical to a fully functional cluster, but, run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted. Procedure To make a pod critical: Create a Pod spec or edit existing pods to include the system-cluster-critical priority class: apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1 # ... 1 Default priority class for pods that should never be evicted from a node. Alternatively, you can specify system-node-critical for pods that are important to the cluster but can be removed if necessary. Create the pod: USD oc create -f <file-name>.yaml 2.3.5. Reducing pod timeouts when using persistent volumes with high file counts If a storage volume contains many files (~1,000,000 or greater), you might experience pod timeouts. This can occur because, when volumes are mounted, OpenShift Container Platform recursively changes the ownership and permissions of the contents of each volume in order to match the fsGroup specified in a pod's securityContext . For large volumes, checking and changing the ownership and permissions can be time consuming, resulting in a very slow pod startup. You can reduce this delay by applying one of the following workarounds: Use a security context constraint (SCC) to skip the SELinux relabeling for a volume. Use the fsGroupChangePolicy field inside an SCC to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. Use the Cluster Resource Override Operator to automatically apply an SCC to skip the SELinux relabeling. Use a runtime class to skip the SELinux relabeling for a volume. For information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 2.4. Automatically scaling pods with the horizontal pod autoscaler As a developer, you can use a horizontal pod autoscaler (HPA) to specify how OpenShift Container Platform should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. You can create an HPA for any deployment, deployment config, replica set, replication controller, or stateful set. 
For information on scaling pods based on custom metrics, see Automatically scaling pods based on custom metrics . Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. For more information on these objects, see Understanding deployments . 2.4.1. Understanding horizontal pod autoscalers You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. After you create a horizontal pod autoscaler, OpenShift Container Platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization with the desired metric utilization, and scales up or down accordingly. The query and scaling occurs at a regular interval, but can take one to two minutes before metrics become available. For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase. OpenShift Container Platform automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the unready state have 0 CPU usage when scaling up and the autoscaler ignores the pods when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU when scaling down. This allows for more stability during the HPA decision. To use this feature, you must configure readiness checks to determine if a new pod is ready for use. To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. 2.4.1.1. Supported metrics The following metrics are supported by horizontal pod autoscalers: Table 2.1. Metrics Metric Description API version CPU utilization Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU. autoscaling/v1 , autoscaling/v2 Memory utilization Amount of memory used. Can be used to calculate a percentage of the pod's requested memory. autoscaling/v2 Important For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average: An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod. A decrease in replica count must lead to an overall increase in per-pod memory usage. Use the OpenShift Container Platform web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling. The following example shows autoscaling for the hello-node Deployment object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. 
If CPU usage on the pods reaches 75%, the pods increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 Example output horizontalpodautoscaler.autoscaling/hello-node autoscaled Sample YAML to create an HPA for the hello-node deployment object with minReplicas set to 3 apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0 After you create the HPA, you can view the new state of the deployment by running the following command: USD oc get deployment hello-node There are now 5 pods in the deployment: Example output NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config 2.4.2. How does the HPA work? The horizontal pod autoscaler (HPA) extends the concept of pod auto-scaling. The HPA lets you create and manage a group of load-balanced nodes. The HPA automatically increases or decreases the number of pods when a given CPU or memory threshold is crossed. Figure 2.1. High level workflow of the HPA The HPA is an API resource in the Kubernetes autoscaling API group. The autoscaler works as a control loop with a default of 15 seconds for the sync period. During this period, the controller manager queries the CPU, memory utilization, or both, against what is defined in the YAML file for the HPA. The controller manager obtains the utilization metrics from the resource metrics API for per-pod resource metrics like CPU or memory, for each pod that is targeted by the HPA. If a utilization value target is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each pod. The controller then takes the average of utilization across all targeted pods and produces a ratio that is used to scale the number of desired replicas. The HPA is configured to fetch metrics from metrics.k8s.io , which is provided by the metrics server. Because of the dynamic nature of metrics evaluation, the number of replicas can fluctuate during scaling for a group of replicas. Note To implement the HPA, all targeted pods must have a resource request set on their containers. 2.4.3. About requests and limits The scheduler uses the resource request that you specify for containers in a pod, to decide which node to place the pod on. The kubelet enforces the resource limit that you specify for a container to ensure that the container is not allowed to use more than the specified limit. The kubelet also reserves the request amount of that system resource specifically for that container to use. How to use resource metrics? In the pod specifications, you must specify the resource requests, such as CPU and memory. The HPA uses this specification to determine the resource utilization and then scales the target up or down. For example, the HPA object uses the following metric source: type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60 In this example, the HPA keeps the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current resource usage to the requested resource of the pod. 2.4.4. Best practices All pods must have resource requests configured The HPA makes a scaling decision based on the observed CPU or memory utilization values of pods in an OpenShift Container Platform cluster. 
Utilization values are calculated as a percentage of the resource requests of each pod. Missing resource request values can affect the optimal performance of the HPA. Configure the cool down period During horizontal pod autoscaling, there might be a rapid scaling of events without a time gap. Configure the cool down period to prevent frequent replica fluctuations. You can specify a cool down period by configuring the stabilizationWindowSeconds field. The stabilization window is used to restrict the fluctuation of replicas count when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses this window to infer a desired state and avoid unwanted changes to workload scale. For example, a stabilization window is specified for the scaleDown field: behavior: scaleDown: stabilizationWindowSeconds: 300 In the above example, all desired states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm frequently remove pods only to trigger recreating an equivalent pod just moments later. 2.4.4.1. Scaling policies The autoscaling/v2 API allows you to add scaling policies to a horizontal pod autoscaler. A scaling policy controls how the OpenShift Container Platform horizontal pod autoscaler (HPA) scales pods. Scaling policies allow you to restrict the rate that HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a stabilization window , which uses previously computed desired states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine which policy is used, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs scaling, as needed, in further iterations. Sample HPA object with a scaling policy apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0 ... 1 Specifies the direction for the scaling policy, either scaleDown or scaleUp . This example creates a policy for scaling down. 2 Defines the scaling policy. 3 Determines if the policy scales by a specific number of pods or a percentage of pods during each iteration. The default value is pods . 4 Limits the amount of scaling, either the number of pods or percentage of pods, during each iteration. There is no default value for scaling down by number of pods. 5 Determines the length of a scaling iteration. The default value is 15 seconds. 6 The default value for scaling down by percentage is 100%. 7 Determines which policy to use first, if multiple policies are defined. Specify Max to use the policy that allows the highest amount of change, Min to use the policy that allows the lowest amount of change, or Disabled to prevent the HPA from scaling in that policy direction. The default value is Max . 8 Determines the time period the HPA should look back at desired states. The default value is 0 . 9 This example creates a policy for scaling up. 10 Limits the amount of scaling up by the number of pods. 
The default value for scaling up the number of pods is 4%. 11 Limits the amount of scaling up by the percentage of pods. The default value for scaling up by percentage is 100%. Example policy for scaling down apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: ... minReplicas: 20 ... behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled In this example, when the number of pods is greater than 40, the percent-based policy is used for scaling down, as that policy results in a larger change, as required by the selectPolicy . If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the type: Percent and value: 10 parameters), over one minute ( periodSeconds: 60 ). For the iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls below 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time ( type: Pods and value: 4 ), over 30 seconds ( periodSeconds: 30 ), until there are 20 replicas remaining ( minReplicas ). The selectPolicy: Disabled parameter prevents the HPA from scaling up the pods. You can manually scale up by adjusting the number of replicas in the replica set or deployment set, if needed. If set, you can view the scaling policy by using the oc edit command: USD oc edit hpa hpa-resource-metrics-memory Example output apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior:\ '{"ScaleUp":{"StabilizationWindowSeconds":0,"SelectPolicy":"Max","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":15},{"Type":"Percent","Value":100,"PeriodSeconds":15}]},\ "ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":60},{"Type":"Percent","Value":10,"PeriodSeconds":60}]}}' ... 2.4.5. Creating a horizontal pod autoscaler by using the web console From the web console, you can create a horizontal pod autoscaler (HPA) that specifies the minimum and maximum number of pods you want to run on a Deployment or DeploymentConfig object. You can also define the amount of CPU or memory usage that your pods should target. Note An HPA cannot be added to deployments that are part of an Operator-backed service, Knative service, or Helm chart. Procedure To create an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Add HorizontalPodAutoscaler to open the Add HorizontalPodAutoscaler form. Figure 2.2. Add HorizontalPodAutoscaler From the Add HorizontalPodAutoscaler form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click Save . Note If any of the values for CPU and memory usage are missing, a warning is displayed. To edit an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Edit HorizontalPodAutoscaler to open the Edit Horizontal Pod Autoscaler form. 
From the Edit Horizontal Pod Autoscaler form, edit the minimum and maximum pod limits and the CPU and memory usage, and click Save . Note While creating or editing the horizontal pod autoscaler in the web console, you can switch from Form view to YAML view . To remove an HPA in the web console: In the Topology view, click the node to reveal the side panel. From the Actions drop-down list, select Remove HorizontalPodAutoscaler . In the confirmation pop-up window, click Remove to remove the HPA. 2.4.6. Creating a horizontal pod autoscaler for CPU utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the CPU usage you specify. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. When autoscaling for CPU utilization, you can use the oc autoscale command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. To autoscale for a specific CPU value, create a HorizontalPodAutoscaler object with the target CPU and pod limits. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To create a horizontal pod autoscaler for CPU utilization: Perform one of the following: To scale based on the percent of CPU utilization, create a HorizontalPodAutoscaler object for an existing object: USD oc autoscale <object_type>/<name> \ 1 --min <number> \ 2 --max <number> \ 3 --cpu-percent=<percent> 4 1 Specify the type and name of the object to autoscale. The object must exist and be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 2 Optionally, specify the minimum number of replicas when scaling down. 3 Specify the maximum number of replicas when scaling up. 4 Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. For example, the following command shows autoscaling for the hello-node deployment object. The initial deployment requires 3 pods. 
The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods will increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 To scale for a specific CPU value, create a YAML file similar to the following for an existing object: Create a YAML file similar to the following: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify cpu for CPU utilization. 10 Set to AverageValue . 11 Set to averageValue with the targeted CPU value. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml Verify that the horizontal pod autoscaler was created: USD oc get hpa cpu-autoscale Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m 2.4.7. Creating a horizontal pod autoscaler object for memory utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. For memory utilization, you can specify the minimum and maximum number of pods and the average memory utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler Example output Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none> Procedure To create a horizontal pod autoscaler for memory utilization: Create a YAML file for one of the following: To scale for a specific memory value, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , or Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set the type to AverageValue . 11 Specify averageValue and a specific memory value. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. To scale for a percentage, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a ReplicationController, use v1 . For a DeploymentConfig, use apps.openshift.io/v1 . For a Deployment, ReplicaSet, Statefulset object, use apps/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 
8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set to Utilization . 11 Specify averageUtilization and a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml For example: USD oc create -f hpa.yaml Example output horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created Verify that the horizontal pod autoscaler was created: USD oc get hpa hpa-resource-metrics-memory Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m USD oc describe hpa hpa-resource-metrics-memory Example output Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target 2.4.8. Understanding horizontal pod autoscaler status conditions by using the CLI You can use the status conditions set to determine whether or not the horizontal pod autoscaler (HPA) is able to scale and whether or not it is currently restricted in any way. The HPA status conditions are available with the v2 version of the autoscaling API. The HPA responds with the following status conditions: The AbleToScale condition indicates whether HPA is able to fetch and update metrics, as well as whether any backoff-related conditions could prevent scaling. A True condition indicates scaling is allowed. A False condition indicates scaling is not allowed for the reason specified. The ScalingActive condition indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired metrics. A True condition indicates metrics is working properly. A False condition generally indicates a problem with fetching metrics. The ScalingLimited condition indicates that the desired scale was capped by the maximum or minimum of the horizontal pod autoscaler. A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale. A False condition indicates that the requested scaling is allowed. 
USD oc describe hpa cm-test Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events: 1 The horizontal pod autoscaler status messages. The following is an example of a pod that is unable to scale: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind "ReplicationController" in group "apps" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind "ReplicationController" in group "apps" The following is an example of a pod that could not obtain the needed metrics for scaling: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API The following is an example of a pod where the requested autoscaling was less than the required minimums: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.8.1. Viewing horizontal pod autoscaler status conditions by using the CLI You can view the status conditions set on a pod by the horizontal pod autoscaler (HPA). Note The horizontal pod autoscaler status conditions are available with the v2 version of the autoscaling API. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To view the status conditions on a pod, use the following command with the name of the pod: USD oc describe hpa <pod-name> For example: USD oc describe hpa cm-test The conditions appear in the Conditions field in the output. Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.9. Additional resources For more information on replication controllers and deployment controllers, see Understanding deployments and deployment configs . For an example on the usage of HPA, see Horizontal Pod Autoscaling of Quarkus Application Based on Memory Utilization . 2.5. Automatically adjust pod resource levels with the vertical pod autoscaler The OpenShift Container Platform Vertical Pod Autoscaler Operator (VPA) automatically reviews the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. The VPA uses individual custom resources (CR) to update all of the pods in a project that are associated with any built-in workload objects, including the following object types: Deployment DeploymentConfig StatefulSet Job DaemonSet ReplicaSet ReplicationController The VPA can also update certain custom resource object that manage pods, as described in Using the Vertical Pod Autoscaler Operator with Custom Resources . The VPA helps you to understand the optimal CPU and memory usage for your pods and can automatically maintain pod resources through the pod lifecycle. 2.5.1. About the Vertical Pod Autoscaler Operator The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions that the VPA Operator should take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project. The VPA Operator consists of three components, each of which has its own pod in the VPA namespace: Recommender The VPA recommender monitors the current and past resource consumption and, based on this data, determines the optimal CPU and memory resources for the pods in the associated workload object. Updater The VPA updater checks if the pods in the associated workload object have the correct resources. 
If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pods so that they can be recreated by their controllers with the updated requests. Admission controller The VPA admission controller sets the correct resource requests on each new pod in the associated workload object, whether the pod is new or was recreated by its controller due to the VPA updater actions. You can use the default recommender or use your own alternative recommender to autoscale based on your own algorithms. The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough. The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then re-deploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and deletes the pod. The workload object, such as replica set, restarts the pods and the VPA updates the new pod with its recommended resources. For developers, you can use the VPA to help ensure your pods stay up during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod. Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. Note If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the VPA. 2.5.2. Installing the Vertical Pod Autoscaler Operator You can use the OpenShift Container Platform web console to install the Vertical Pod Autoscaler Operator (VPA). Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose VerticalPodAutoscaler from the list of available Operators, and click Install .
On the Install Operator page, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-vertical-pod-autoscaler namespace, which is automatically created if it does not exist. Click Install . Verification Verify the installation by listing the VPA Operator components: Navigate to Workloads Pods . Select the openshift-vertical-pod-autoscaler project from the drop-down menu and verify that there are four pods running. Navigate to Workloads Deployments to verify that there are four deployments running. Optional: Verify the installation in the OpenShift Container Platform CLI using the following command: USD oc get all -n openshift-vertical-pod-autoscaler The output shows four pods and four deployments: Example output NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s 2.5.3. Moving the Vertical Pod Autoscaler Operator components The Vertical Pod Autoscaler Operator (VPA) and each component has its own pod in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure or worker nodes by adding a node selector to the VPA subscription and the VerticalPodAutoscalerController CR. You can create and use infrastructure nodes to host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment. For more information, see Creating infrastructure machine sets . You can move the components to the same node or separate nodes as appropriate for your organization. The following example shows the default deployment of the VPA pods to the control plane nodes.
Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none> Procedure Move the VPA Operator pod by adding a node selector to the Subscription custom resource (CR) for the VPA Operator: Edit the CR: USD oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler Add a node selector to match the node role label on the node where you want to install the VPA Operator pod: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: "" 1 1 Specifies the node role of the node where you want to move the VPA Operator pod. Note If the infra node uses taints, you need to add a toleration to the Subscription CR. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "node-role.kubernetes.io/infra" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for a taint on the node where you want to move the VPA Operator pod. Move each VPA component by adding node selectors to the VerticalPodAutoscalerController custom resource (CR): Edit the CR: USD oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler Add node selectors to match the node role label on the node where you want to install the VPA components: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 3 1 Optional: Specifies the node role for the VPA admission pod. 2 Optional: Specifies the node role for the VPA recommender pod. 3 Optional: Specifies the node role for the VPA updater pod. Note If a target node uses taints, you need to add a toleration to the VerticalPodAutoscalerController CR. For example: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ...
spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 1 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 2 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 3 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for the admission controller pod for a taint on the node where you want to install the pod. 2 Specifies a toleration for the recommender pod for a taint on the node where you want to install the pod. 3 Specifies a toleration for the updater pod for a taint on the node where you want to install the pod. Verification You can verify the pods have moved by using the following command: USD oc get pods -n openshift-vertical-pod-autoscaler -o wide The pods are no longer deployed to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> Additional resources Creating infrastructure machine sets 2.5.4. About Using the Vertical Pod Autoscaler Operator To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the pods associated with that workload object. You can use a VPA with a deployment, stateful set, job, daemon set, replica set, or replication controller workload object. The VPA CR must be in the same project as the pods you want to monitor. You use the VPA CR to associate a workload object and specify which mode the VPA operates in: The Auto and Recreate modes automatically apply the VPA CPU and memory recommendations throughout the pod lifetime. The VPA deletes any pods in the project that are out of alignment with its recommendations. When redeployed by the workload object, the VPA updates the new pods with its recommendations. The Initial mode automatically applies VPA recommendations only at pod creation. The Off mode only provides recommended resource limits and requests, allowing you to manually apply the recommendations. The off mode does not update pods. You can also use the CR to opt-out certain containers from VPA evaluation and updates. For example, a pod has the following limits and requests: resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi After creating a VPA that is set to auto , the VPA learns the resource usage and deletes the pod. When redeployed, the pod uses the new resource limits and requests: resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml After a few minutes, the output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... 
recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... The output shows the recommended resources, target , the minimum recommended resources, lowerBound , the highest recommended resources, upperBound , and the most recent resource recommendations, uncappedTarget . The VPA uses the lowerBound and upperBound values to determine if a pod needs to be updated. If a pod has resource requests below the lowerBound values or above the upperBound values, the VPA terminates and recreates the pod with the target values. 2.5.4.1. Changing the VPA minimum value By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete and update their pods. As a result, workload objects that specify fewer than two replicas are not automatically acted upon by the VPA. The VPA does update new pods from these workload objects if the pods are restarted by some process external to the VPA. You can change this cluster-wide minimum value by modifying the minReplicas parameter in the VerticalPodAutoscalerController custom resource (CR). For example, if you set minReplicas to 3 , the VPA does not delete and update pods for workload objects that specify fewer than three replicas. Note If you set minReplicas to 1 , the VPA can delete the only pod for a workload object that specifies only one replica. You should use this setting with one-replica objects only if your workload can tolerate downtime whenever the VPA deletes a pod to adjust its resources. To avoid unwanted downtime with one-replica objects, configure the VPA CRs with the podUpdatePolicy set to Initial , which automatically updates the pod only when it is restarted by some process external to the VPA, or Off , which allows you to update the pod manually at an appropriate time for your application. Example VerticalPodAutoscalerController object apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: "2021-04-21T19:29:49Z" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: "142172" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specify the minimum number of replicas in a workload object for the VPA to act on. Any objects with replicas fewer than the minimum are not automatically deleted by the VPA. 2.5.4.2. Automatically applying VPA recommendations To use the VPA to automatically update pods, create a VPA CR for a specific workload object with updateMode set to Auto or Recreate . When the pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. The VPA deletes any pods that do not meet the VPA recommendations for CPU and memory. When redeployed, the pods use the new resource limits and requests based on the VPA recommendations, honoring any pod disruption budget set for your applications. The recommendations are added to the status field of the VPA CR for reference. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. 
Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . Example VPA CR for the Auto mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto or Recreate : Auto . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. Recreate . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Note Before a VPA can determine recommendations for resources and apply the recommended resources to new pods, operating pods must exist and be running in the project. If a workload's resource usage, such as CPU and memory, is consistent, the VPA can determine recommendations for resources in a few minutes. If a workload's resource usage is inconsistent, the VPA must collect metrics at various resource usage intervals for the VPA to make an accurate recommendation. 2.5.4.3. Automatically applying VPA recommendations on pod creation To use the VPA to apply the recommended resources only when a pod is first deployed, create a VPA CR for a specific workload object with updateMode set to Initial . Then, manually delete any pods associated with the workload object that you want to use the VPA recommendations. In the Initial mode, the VPA does not delete pods and does not update the pods as it learns new resource recommendations. Example VPA CR for the Initial mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Initial" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Initial . The VPA assigns resources when pods are created and does not change the resources during the lifetime of the pod. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.4.4. Manually applying VPA recommendations To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with updateMode set to off . When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the status field of the VPA CR. The VPA does not update the pods as it determines new resource recommendations. 
Example VPA CR for the Off mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Off" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Off . You can view the recommendations using the following command. USD oc get vpa <vpa-name> --output yaml With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the pods using the recommended resources. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.4.5. Exempting containers from applying VPA recommendations If your workload object has multiple containers and you do not want the VPA to evaluate and act on all of the containers, create a VPA CR for a specific workload object and add a resourcePolicy to opt-out specific containers. When the VPA updates the pods with recommended resources, any containers with a resourcePolicy are not updated and the VPA does not present recommendations for those containers in the pod. apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto , Recreate , or Off . The Recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. 4 Specify the containers you want to opt-out and set mode to Off . For example, a pod has two containers, the same resource requests and limits: # ... spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi # ... After launching a VPA CR with the backend container set to opt-out, the VPA terminates and recreates the pod with the recommended resources applied only to the frontend container: ... spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k ... name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi ... 2.5.4.6. Performance tuning the VPA Operator As a cluster administrator, you can tune the performance of your Vertical Pod Autoscaler Operator (VPA) to limit the rate at which the VPA makes requests of the Kubernetes API server and to specify the CPU and memory resources for the VPA recommender, updater, and admission controller component pods. Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. 
However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload had been running for some time. These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions. You can perform the following tunings on the VPA components by editing the VerticalPodAutoscalerController custom resource (CR): To prevent throttling and pod admission delays, set the queries-per-second (QPS) and burst rates for VPA requests of the Kubernetes API server by using the kube-api-qps and kube-api-burst parameters. To ensure sufficient CPU and memory, set the CPU and memory requests for VPA component pods by using the standard cpu and memory resource requests. To configure the VPA Operator to monitor only workloads that are being managed by a VPA CR, set the memory-saver parameter to true for the recommender component. For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors. Important These recommended values were derived from internal Red Hat testing on clusters that are not necessarily representative of real-world clusters. You should test these values in a non-production cluster before configuring a production cluster. Table 2.2. Requests by containers in the cluster Component 1-500 containers 500-1000 containers 1000-2000 containers 2000-4000 containers 4000+ containers CPU Memory CPU Memory CPU Memory CPU Memory CPU Memory Admission 25m 50Mi 25m 75Mi 40m 150Mi 75m 260Mi (0.03c)/2 + 10 [1] (0.1c)/2 + 50 [1] Recommender 25m 100Mi 50m 160Mi 75m 275Mi 120m 420Mi (0.05c)/2 + 50 [1] (0.15c)/2 + 120 [1] Updater 25m 100Mi 50m 220Mi 80m 350Mi 150m 500Mi (0.07c)/2 + 20 [1] (0.15c)/2 + 200 [1] c is the number of containers in the cluster. Note It is recommended that you set the memory limit on your containers to at least double the recommended requests in the table. However, because CPU is a compressible resource, setting CPU limits for containers can throttle the VPA. As such, it is recommended that you do not set a CPU limit on your containers. Table 2.3. Rate limits by VPAs in the cluster Component 1 - 150 VPAs 151 - 500 VPAs 501-2000 VPAs 2001-4000 VPAs QPS Limit [1] Burst [2] QPS Limit Burst QPS Limit Burst QPS Limit Burst Recommender 5 10 30 60 60 120 120 240 Updater 5 10 30 60 60 120 120 240 QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 5.0 . Burst specifies the burst limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 10.0 . Note If you have more than 4000 VPAs in your cluster, it is recommended that you start performance tuning with the values in the table and slowly increase the values until you achieve the desired recommender and updater latency and performance. 
You should adjust these values slowly because increased QPS and Burst could affect the cluster health and slow down the Kubernetes API server if too many API requests are being sent to the API server from the VPA components. The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values: The container memory and CPU requests for all three VPA components The container memory limit for all three VPA components The QPS and burst rates for all three VPA components The memory-saver parameter to true for the VPA recommender component Example VerticalPodAutoscalerController CR apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specifies the tuning parameters for the VPA admission controller. 2 Specifies the API QPS and burst rates for the VPA admission controller. kube-api-qps : Specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is 5.0 . kube-api-burst : Specifies the burst limit when making requests to Kubernetes API server. The default is 10.0 . 3 Specifies the resource requests and limits for the VPA admission controller pod. 4 Specifies the tuning parameters for the VPA recommender. 5 Specifies that the VPA Operator monitors only workloads with a VPA CR. The default is false . 6 Specifies the tuning parameters for the VPA updater. You can verify that the settings were applied to each VPA component pod. Example updater pod apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 # ... resources: requests: cpu: 80m memory: 350M # ... Example admission controller pod apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 # ... resources: requests: cpu: 40m memory: 150Mi # ... Example recommender pod apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true # ... resources: requests: cpu: 75m memory: 275Mi # ... 2.5.4.7. 
Using an alternative recommender You can use your own recommender to autoscale based on your own algorithms. If you do not specify an alternative recommender, OpenShift Container Platform uses the default recommender, which suggests CPU and memory requests based on historical usage. Because there is no universal recommendation policy that applies to all types of workloads, you might want to create and deploy different recommenders for specific workloads. For example, the default recommender might not accurately predict future resource usage when containers exhibit certain resource behaviors, such as cyclical patterns that alternate between usage spikes and idling as used by monitoring applications, or recurring and repeating patterns used with deep learning applications. Using the default recommender with these usage behaviors might result in significant over-provisioning and Out of Memory (OOM) kills for your applications. Note Instructions for how to create a recommender are beyond the scope of this documentation, Procedure To use an alternative recommender for your pods: Create a service account for the alternative recommender and bind that service account to the required cluster role: apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> 1 Creates a service account for the recommender in the namespace where the recommender is deployed. 2 Binds the recommender service account to the metrics-reader role. Specify the namespace where the recommender is to be deployed. 3 Binds the recommender service account to the vpa-actor role. Specify the namespace where the recommender is to be deployed. 4 Binds the recommender service account to the vpa-target-reader role. Specify the namespace where the recommender is to be deployed. To add the alternative recommender to the cluster, create a Deployment object similar to the following: apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true 1 Creates a container for your alternative recommender. 2 Specifies your recommender image. 
3 Associates the service account that you created for the recommender. A new pod is created for the alternative recommender in the same namespace. USD oc get pods Example output NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s Configure a VPA CR that includes the name of the alternative recommender Deployment object. Example VPA CR to include the alternative recommender apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: "apps/v1" kind: Deployment 2 name: frontend 1 Specifies the name of the alternative recommender deployment. 2 Specifies the name of an existing workload object you want this VPA to manage. 2.5.5. Using the Vertical Pod Autoscaler Operator You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which pods it should analyze and determines the actions the VPA should take with those pods. You can use the VPA to scale built-in resources such as deployments or stateful sets, and custom resources that manage pods. For more information on using the VPA with custom resources, see "Using the Vertical Pod Autoscaler Operator with Custom Resources." Prerequisites The workload object that you want to autoscale must exist. If you want to use an alternative recommender, a deployment including that recommender must exist. Procedure To create a VPA CR for a specific workload object: Change to the project where the workload object you want to scale is located. Create a VPA CR YAML file: apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" recommenders: 5 - name: my-recommender 1 Specify the type of workload object you want this VPA to manage: Deployment , StatefulSet , Job , DaemonSet , ReplicaSet , or ReplicationController . 2 Specify the name of an existing workload object you want this VPA to manage. 3 Specify the VPA mode: auto to automatically apply the recommended resources on pods associated with the controller. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. recreate to automatically apply the recommended resources on pods associated with the workload object. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. The recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. initial to automatically apply the recommended resources when pods associated with the workload object are created. The VPA does not update the pods as it learns new resource recommendations. off to only generate resource recommendations for the pods associated with the workload object. The VPA does not update the pods as it learns new resource recommendations and does not apply the recommendations to new pods. 4 Optional. Specify the containers you want to opt-out and set the mode to Off . 5 Optional. Specify an alternative recommender. 
Create the VPA CR: USD oc create -f <file-name>.yaml After a few moments, the VPA learns the resource usage of the containers in the pods associated with the workload object. You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml The output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... 1 lowerBound is the minimum recommended resource levels. 2 target is the recommended resource levels. 3 uncappedTarget is the most recent resource recommendations. 4 upperBound is the highest recommended resource levels. 2.5.5.1. Example custom resources for the Vertical Pod Autoscaler The Vertical Pod Autoscaler Operator (VPA) can update not only built-in resources such as deployments or stateful sets, but also custom resources that manage pods. In order to use the VPA with a custom resource, when you create the CustomResourceDefinition (CRD) object, you must configure the labelSelectorPath field in the /scale subresource. The /scale subresource creates a Scale object. The labelSelectorPath field defines the JSON path inside the custom resource that corresponds to Status.Selector in the Scale object and in the custom resource. The following is an example of a CustomResourceDefinition and a CustomResource that fulfills these requirements, along with a VerticalPodAutoscaler definition that targets the custom resource. The following example shows the /scale subresource contract. Note This example does not result in the VPA scaling pods because there is no controller for the custom resource that allows it to own any pods. As such, you must write a controller in a language supported by Kubernetes to manage the reconciliation and state management between the custom resource and your pods. The example illustrates the configuration for the VPA to understand the custom resource as scalable. Example custom CRD, CR apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod 1 Specifies the JSON path that corresponds to status.selector field of the custom resource object. Example custom CR apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: "app=scalable-cr" 1 replicas: 1 1 Specify the label type to apply to managed pods. This is the field referenced by the labelSelectorPath in the custom resource definition object.
Example VPA object apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: "Auto" 2.5.6. Uninstalling the Vertical Pod Autoscaler Operator You can remove the Vertical Pod Autoscaler Operator (VPA) from your OpenShift Container Platform cluster. After uninstalling, the resource requests for the pods already modified by an existing VPA CR do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the Vertical Pod Autoscaler Operator. Note You can remove a specific VPA CR by using the oc delete vpa <vpa-name> command. The same actions apply for resource requests as uninstalling the vertical pod autoscaler. After removing the VPA Operator, it is recommended that you remove the other components associated with the Operator to avoid potential issues. Prerequisites The Vertical Pod Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-vertical-pod-autoscaler project. For the VerticalPodAutoscaler Operator, click the Options menu and select Uninstall Operator . Optional: To remove all operands associated with the Operator, in the dialog box, select Delete all operand instances for this operator checkbox. Click Uninstall . Optional: Use the OpenShift CLI to remove the VPA components: Delete the VPA namespace: USD oc delete namespace openshift-vertical-pod-autoscaler Delete the VPA custom resource definition (CRD) objects: USD oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io USD oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io USD oc delete crd verticalpodautoscalers.autoscaling.k8s.io Deleting the CRDs removes the associated roles, cluster roles, and role bindings. Note This action removes from the cluster all user-created VPA CRs. If you re-install the VPA, you must create these objects again. Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration vpa-webhook-config Delete the VPA Operator: USD oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler 2.6. Providing sensitive data to pods by using secrets Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text. 2.6.1. Understanding secrets The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 
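Secrets can also be created directly from literal values with the CLI. The following command is a minimal sketch that uses placeholder values and produces an opaque secret similar to the definition shown next (the stringData entry would still need to be added separately): USD oc create secret generic test-secret -n my-namespace --from-literal=username=<username> --from-literal=password=<password>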
YAML Secret object definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary . 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod's service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume). 2.6.1.1. Types of secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/basic-auth : Use with Basic authentication kubernetes.io/dockercfg : Use as an image pull secret kubernetes.io/dockerconfigjson : Use as an image pull secret kubernetes.io/service-account-token : Use to obtain a legacy service account API token kubernetes.io/ssh-auth : Use with SSH key authentication kubernetes.io/tls : Use with TLS certificate authorities Specify type: Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. For examples of creating different types of secrets, see Understanding how to create secrets . 2.6.1.2. Secret data keys Secret keys must be in a DNS subdomain. 2.6.1.3. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.16, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. 
However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 2.6.2. Understanding how to create secrets As an administrator, you must create a secret before developers can create the pods that depend on that secret. When creating secrets: Create a secret object that contains the data you want to keep secret. The specific data required for each secret type is described in the following sections. Example YAML object that creates an opaque secret apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB 1 Specifies the type of secret. 2 Specifies encoded string and data. 3 Specifies decoded string and data. Use either the data or stringData fields, not both. Update the pod's service account to reference the secret: YAML of a service account that uses a secret apiVersion: v1 kind: ServiceAccount ... secrets: - name: test-secret Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume): YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never 1 Add a volumeMounts field to each container that needs the secret. 2 Specifies an unused directory name where you would like the secret to appear. Each key in the secret data map becomes the filename under mountPath . 3 Set to true . If true, this instructs the driver to provide a read-only volume. 4 Specifies the name of the secret. YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Specifies the environment variable that consumes the secret key. YAML of a build config populating environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest' 1 Specifies the environment variable that consumes the secret key. 2.6.2.1. Secret creation restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers.
As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to a Secret object. Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that could exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.6.2.2. Creating an opaque secret As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values. Procedure Create a Secret object in a YAML file on a control plane node. For example: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Use the following command to create a Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.3. Creating a legacy service account token secret As an administrator, you can create a legacy service account token secret, which allows you to distribute a service account token to applications that must authenticate to the API. Warning It is recommended to obtain bound service account tokens using the TokenRequest API instead of using legacy service account token secrets. You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a nonexpiring token in a readable API object is acceptable to you. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. For more information, see "Configuring bound service account tokens using volume projection". Procedure Create a Secret object in a YAML file on a control plane node: Example Secret object apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: "sa-name" 1 type: kubernetes.io/service-account-token 2 1 Specifies an existing service account name. 
If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 2 Specifies a service account token secret. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets Configuring bound service account tokens using volume projection Understanding and creating service accounts 2.6.2.4. Creating a basic authentication secret As an administrator, you can create a basic authentication secret, which allows you to store the credentials needed for basic authentication. When using this secret type, the data parameter of the Secret object must contain the following keys encoded in the base64 format: username : the user name for authentication password : the password or token for authentication Note You can use the stringData parameter to use clear text content. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 stringData: 2 username: admin password: <password> 1 Specifies a basic authentication secret. 2 Specifies the basic authentication values to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.5. Creating an SSH authentication secret As an administrator, you can create an SSH authentication secret, which allows you to store data used for SSH authentication. When using this secret type, the data parameter of the Secret object must contain the SSH credential to use. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y ... 1 Specifies an SSH authentication secret. 2 Specifies the SSH key/value pair as the SSH credentials to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.6. Creating a Docker configuration secret As an administrator, you can create a Docker configuration secret, which allows you to store the credentials for accessing a container image registry. kubernetes.io/dockercfg . Use this secret type to store your local Docker configuration file. The data parameter of the secret object must contain the contents of a .dockercfg file encoded in the base64 format. kubernetes.io/dockerconfigjson .
Use this secret type to store your local Docker configuration JSON file. The data parameter of the secret object must contain the contents of a .docker/config.json file encoded in the base64 format. Procedure Create a Secret object in a YAML file on a control plane node: Example Docker configuration secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockercfg 1 data: .dockercfg: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration file. 2 The output of a base64-encoded Docker configuration file. Example Docker configuration JSON secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration JSON file. 2 The output of a base64-encoded Docker configuration JSON file. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.7. Creating a secret using the web console You can create secrets using the web console. Procedure Navigate to Workloads Secrets . Click Create From YAML . Edit the YAML manually to your specifications, or drag and drop a file into the YAML editor. For example: apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com 1 This example specifies an opaque secret; however, you may see other secret types such as service account token secret, basic authentication secret, SSH authentication secret, or a secret that uses Docker configuration. 2 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. Click Create . Click Add Secret to workload . From the drop-down menu, select the workload to add. Click Save . 2.6.3. Understanding how to update secrets When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec). Updating a secret follows the same workflow as deploying a new Container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods will report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.6.4.
Creating and using secrets As an administrator, you can create a service account token secret. This allows you to distribute a service account token to applications that must authenticate to the API. Procedure Create a service account in your namespace by running the following command: USD oc create sa <service_account_name> -n <your_namespace> Save the following YAML example to a file named service-account-token-secret.yaml . The example includes a Secret object configuration that you can use to generate a service account token: apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: "sa-name" 2 type: kubernetes.io/service-account-token 3 1 Replace <secret_name> with the name of your service token secret. 2 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 3 Specifies a service account token secret type. Generate the service account token by applying the file: USD oc apply -f service-account-token-secret.yaml Get the service account token from the secret by running the following command: USD oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1 Example output ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA 1 Replace <sa_token_secret> with the name of your service token secret. Use your service account token to authenticate with the API of your cluster: USD curl -X GET <openshift_cluster_api> --header "Authorization: Bearer <token>" 1 2 1 Replace <openshift_cluster_api> with the OpenShift cluster API. 2 Replace <token> with the service account token that is output in the preceding command. 2.6.5. About using signed certificates with secrets To secure communication to your service, you can configure OpenShift Container Platform to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Service Pod spec configured for a service serving certificates secret. apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1 # ... 1 Specify the name for the certificate Other pods can trust cluster-created certificates (which are only signed for internal DNS names), by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. 
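For example, a client container in the cluster could verify the serving certificate of such a service by pointing at this automatically mounted CA bundle. The following command is a minimal illustration only; the service name my-service and the namespace my-namespace are placeholder values and are not objects created by this procedure:
USD curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt https://my-service.my-namespace.svc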
The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 2.6.5.1. Generating signed certificates for use with secrets To use a signed serving certificate/key pair with a pod, create or edit the service to add the service.beta.openshift.io/serving-cert-secret-name annotation, then add the secret to the pod. Procedure To create a service serving certificate secret : Edit the Pod spec for your service. Add the service.beta.openshift.io/serving-cert-secret-name annotation with the name you want to use for your secret. kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. Create the service: USD oc create -f <file-name>.yaml View the secret to make sure it was created: View a list of all secrets: USD oc get secrets Example output NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m View details on your secret: USD oc describe secret my-cert Example output Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes Edit your Pod spec with that secret. apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: "/etc/my-path" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511 When it is available, your pod will run. The certificate will be good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 2.6.6. Troubleshooting secrets If a service certificate generation fails with (service's service.beta.openshift.io/serving-cert-generation-error annotation contains): secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 The service that generated the certificate no longer exists, or has a different serviceUID . 
You must force certificates regeneration by removing the old secret, and clearing the following annotations on the service service.beta.openshift.io/serving-cert-generation-error , service.beta.openshift.io/serving-cert-generation-error-num : Delete the secret: USD oc delete secret <secret_name> Clear the annotations: USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing annotation has a - after the annotation name to be removed. 2.7. Providing sensitive data to pods by using an external secrets store Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an alternative to using Kubernetes Secret objects to provide sensitive information, you can use an external secrets store to store the sensitive information. You can use the Secrets Store CSI Driver Operator to integrate with an external secrets store and mount the secret content as a pod volume. Important The Secrets Store CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.7.1. About the Secrets Store CSI Driver Operator Kubernetes secrets are stored with Base64 encoding. etcd provides encryption at rest for these secrets, but when secrets are retrieved, they are decrypted and presented to the user. If role-based access control is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Additionally, anyone who is authorized to create a pod in a namespace can use that access to read any secret in that namespace. To store and manage your secrets securely, you can configure the OpenShift Container Platform Secrets Store Container Storage Interface (CSI) Driver Operator to mount secrets from an external secret management system, such as Azure Key Vault, by using a provider plugin. Applications can then use the secret, but the secret does not persist on the system after the application pod is destroyed. The Secrets Store CSI Driver Operator, secrets-store.csi.k8s.io , enables OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container's file system. Secrets store volumes are mounted in-line. 2.7.1.1. Secrets store providers The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault HashiCorp Vault 2.7.1.2. Automatic rotation The Secrets Store CSI driver periodically rotates the content in the mounted volume with the content from the external secrets store. 
If a secret is updated in the external secrets store, the secret will be updated in the mounted volume. The Secrets Store CSI Driver Operator polls for updates every 2 minutes. If you enabled synchronization of mounted content as Kubernetes secrets, the Kubernetes secrets are also rotated. Applications consuming the secret data must watch for updates to the secrets. 2.7.2. Installing the Secrets Store CSI driver Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To install the Secrets Store CSI driver: Install the Secrets Store CSI Driver Operator: Log in to the web console. Click Operators OperatorHub . Locate the Secrets Store CSI Driver Operator by typing "Secrets Store CSI" in the filter box. Click the Secrets Store CSI Driver Operator button. On the Secrets Store CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the Secrets Store CSI Driver Operator is listed in the Installed Operators section of the web console. Create the ClusterCSIDriver instance for the driver ( secrets-store.csi.k8s.io ): Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed Click Create . 2.7.3. Mounting secrets from an external secrets store to a CSI volume After installing the Secrets Store CSI Driver Operator, you can mount secrets from one of the following external secrets stores to a CSI volume: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault HashiCorp Vault 2.7.3.1. Mounting secrets from AWS Secrets Manager You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. To mount secrets from AWS Secrets Manager, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have configured AWS Secrets Manager to store the required secrets. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Secrets Manager provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Secrets Manager provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. 
Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "secretsmanager:GetSecretValue" - "secretsmanager:DescribeSecret" effect: Allow resource: "arn:*:secretsmanager:*:*:secret:testSecret-??????" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. 
Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testSecret" objectType: "secretsmanager" 1 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount: List the secrets in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret View a secret in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.2. Mounting secrets from AWS Systems Manager Parameter Store You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Systems Manager Parameter Store to a CSI volume in OpenShift Container Platform. To mount secrets from AWS Systems Manager Parameter Store, your cluster must be installed on AWS and use AWS Security Token Service (STS). 
Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have configured AWS Systems Manager Parameter Store to store the required secrets. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Systems Manager Parameter Store provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Systems Manager Parameter Store provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws 
Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "ssm:GetParameter" - "ssm:GetParameters" effect: Allow resource: "arn:*:ssm:*:*:parameter/testParameter*" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testParameter" objectType: "ssmparameter" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. 
Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Systems Manager Parameter Store in the pod volume mount: List the secrets in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testParameter View a secret in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.3. Mounting secrets from Azure Key Vault You can use the Secrets Store CSI Driver Operator to mount secrets from Azure Key Vault to a CSI volume in OpenShift Container Platform. To mount secrets from Azure Key Vault, your cluster must be installed on Microsoft Azure. Prerequisites Your cluster is installed on Azure. You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have configured Azure Key Vault to store the required secrets. You have installed the Azure CLI ( az ). You have access to the cluster as a user with the cluster-admin role. Procedure Install the Azure Key Vault provider: Create a YAML file with the following configuration for the provider resources: Important The Azure Key Vault provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream Azure documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example azure-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: 
drop: - ALL volumeMounts: - mountPath: "/provider" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: "/var/run/secrets-store-csi-providers" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-azure service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f azure-provider.yaml Create a service principal to access the key vault: Set the service principal client secret as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_SECRET="USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)" Set the service principal client ID as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_ID="USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)" Create a generic secret with the service principal client secret and ID by running the following command: USD oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET} Apply the secrets-store.csi.k8s.io/used=true label to allow the provider to find this nodePublishSecretRef secret: USD oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-azure.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: "false" useVMManagedIdentity: "false" userAssignedIdentityID: "" keyvaultName: "kvname" objects: | array: - | objectName: secret1 objectType: secret tenantId: "tid" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as azure . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-azure.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-azure-provider" 3 nodePublishSecretRef: name: secrets-store-creds 4 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. 
4 Specify the name of the Kubernetes secret that contains the service principal credentials to access Azure Key Vault. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from Azure Key Vault in the pod volume mount: List the secrets in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output secret1 View a secret in the pod mount: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1 Example output my-secret-value 2.7.3.4. Mounting secrets from HashiCorp Vault You can use the Secrets Store CSI Driver Operator to mount secrets from HashiCorp Vault to a CSI volume in OpenShift Container Platform. Important Mounting secrets from HashiCorp Vault by using the Secrets Store CSI Driver Operator has been tested with the following cloud providers: Amazon Web Services (AWS) Microsoft Azure Other cloud providers might work, but have not been tested yet. Additional cloud providers might be tested in the future. Prerequisites You have installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You have installed Helm. You have access to the cluster as a user with the cluster-admin role. Procedure Add the HashiCorp Helm repository by running the following command: USD helm repo add hashicorp https://helm.releases.hashicorp.com Update all repositories to ensure that Helm is aware of the latest versions by running the following command: USD helm repo update Install the HashiCorp Vault provider: Create a new project for Vault by running the following command: USD oc new-project vault Label the vault namespace for pod security admission by running the following command: USD oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite Grant privileged access to the vault service account by running the following command: USD oc adm policy add-scc-to-user privileged -z vault -n vault Grant privileged access to the vault-csi-provider service account by running the following command: USD oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault Deploy HashiCorp Vault by running the following command: USD helm install vault hashicorp/vault --namespace=vault \ --set "server.dev.enabled=true" \ --set "injector.enabled=false" \ --set "csi.enabled=true" \ --set "global.openshift=true" \ --set "injector.agentImage.repository=docker.io/hashicorp/vault" \ --set "server.image.repository=docker.io/hashicorp/vault" \ --set "csi.image.repository=docker.io/hashicorp/vault-csi-provider" \ --set "csi.agent.image.repository=docker.io/hashicorp/vault" \ --set "csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers" Patch the vault-csi-driver daemon set to set the securityContext to privileged by running the following command: USD oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/securityContext", "value": {"privileged": true} }]' Verify that the vault-csi-provider pods have started properly by running the following command: USD oc get pods -n vault Example output NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s Configure HashiCorp Vault to 
store the required secrets: Create a secret by running the following command: USD oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value Verify that the secret is readable at the path secret/example1 by running the following command: USD oc exec vault-0 --namespace=vault -- vault kv get secret/example1 Example output = Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value Configure Vault to use Kubernetes authentication: Enable the Kubernetes auth method by running the following command: USD oc exec vault-0 --namespace=vault -- vault auth enable kubernetes Example output Success! Enabled kubernetes auth method at: kubernetes/ Configure the Kubernetes auth method: Set the token reviewer as an environment variable by running the following command: USD TOKEN_REVIEWER_JWT="USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)" Set the Kubernetes service IP address as an environment variable by running the following command: USD KUBERNETES_SERVICE_IP="USD(oc get svc kubernetes --namespace=default -o go-template="{{ .spec.clusterIP }}")" Update the Kubernetes auth method by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config \ issuer="https://kubernetes.default.svc.cluster.local" \ token_reviewer_jwt="USD{TOKEN_REVIEWER_JWT}" \ kubernetes_host="https://USD{KUBERNETES_SERVICE_IP}:443" \ kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt Example output Success! Data written to: auth/kubernetes/config Create a policy for the application by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path "secret/data/*" { capabilities = ["read"] } EOF Example output Success! Uploaded policy: csi Create an authentication role to access the application by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi \ bound_service_account_names=default \ bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace \ policies=csi \ ttl=20m Example output Success! 
Data written to: auth/kubernetes/role/csi Verify that all of the vault pods are running properly by running the following command: USD oc get pods -n vault Example output NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m Verify that all of the secrets-store-csi-driver pods are running properly by running the following command: USD oc get pods -n openshift-cluster-csi-drivers | grep -E "secrets" Example output secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-vault.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: "csi" vaultAddress: "http://vault.vault:8200" objects: | - secretPath: "secret/data/example1" objectName: "testSecret1" secretKey: "testSecret1" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as vault . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-vault.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-vault-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from your HashiCorp Vault in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret1 View a secret in the pod mount by running the following command: USD oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1 Example output my-secret-value 2.7.4. Enabling synchronization of mounted content as Kubernetes secrets You can enable synchronization to create Kubernetes secrets from the content on a mounted volume.
An example where you might want to enable synchronization is to use an environment variable in your deployment to reference the Kubernetes secret. Warning Do not enable synchronization if you do not want to store your secrets on your OpenShift Container Platform cluster and in etcd. Enable this functionality only if you require it, such as when you want to use environment variables to refer to the secret. If you enable synchronization, the secrets from the mounted volume are synchronized as Kubernetes secrets after you start a pod that mounts the secrets. The synchronized Kubernetes secret is deleted when all pods that mounted the content are deleted. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have access to the cluster as a user with the cluster-admin role. Procedure Edit the SecretProviderClass resource by running the following command: USD oc edit secretproviderclass my-azure-provider 1 1 Replace my-azure-provider with the name of your secret provider class. Add the secretsObjects section with the configuration for the synchronized Kubernetes secrets: apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: "test" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: "false" keyvaultName: "kvname" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: "tid" 1 Specify the configuration for synchronized Kubernetes secrets. 2 Specify the name of the Kubernetes Secret object to create. 3 Specify the type of Kubernetes Secret object to create. For example, Opaque or kubernetes.io/tls . 4 Specify the object name or alias of the mounted content to synchronize. 5 Specify the data field from the specified objectName to populate the Kubernetes secret with. Save the file to apply the changes. 2.7.5. Viewing the status of secrets in the pod volume mount You can view detailed information, including the versions, of the secrets in the pod volume mount. The Secrets Store CSI Driver Operator creates a SecretProviderClassPodStatus resource in the same namespace as the pod. You can review this resource to see detailed information, including versions, about the secrets in the pod volume mount. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have deployed a pod that mounts a volume from the Secrets Store CSI Driver Operator. You have access to the cluster as a user with the cluster-admin role. Procedure View detailed information about the secrets in a pod volume mount by running the following command: USD oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1 1 The name of the secret provider class pod status object is in the format of <pod_name>-<namespace>-<secret_provider_class_name> . Example output ... status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount 2.7.6. 
Uninstalling the Secrets Store CSI Driver Operator Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To uninstall the Secrets Store CSI Driver Operator: Stop all application pods that use the secrets-store.csi.k8s.io provider. Remove any third-party provider plug-in for your chosen secret store. Remove the Container Storage Interface (CSI) driver and associated manifests: Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for secrets-store.csi.k8s.io , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Verify that the CSI driver pods are no longer running. Uninstall the Secrets Store CSI Driver Operator: Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Operators Installed Operators . On the Installed Operators page, scroll or type "Secrets Store CSI" into the Search by name box to find the Operator, and then click it. On the upper, right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the Secrets Store CSI Driver Operator is no longer listed in the Installed Operators section of the web console. 2.8. Creating and using config maps The following sections define config maps and how to create and use them. 2.8.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. 
Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 2.8.2. Creating a config map in the OpenShift Container Platform web console You can create a config map in the OpenShift Container Platform web console. Procedure To create a config map as a cluster administrator: In the Administrator perspective, select Workloads Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . To create a config map as a developer: In the Developer perspective, select Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . 2.8.3. Creating a config map by using the CLI You can use the following command to create a config map from directories, specific files, or literal values. Procedure Create a config map: USD oc create configmap <configmap_name> [options] 2.8.3.1. Creating a config map from a directory You can create a config map from a directory by using the --from-file flag. This method allows you to use multiple files within a directory to create a config map. Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file. For example, the following command creates a config map with the contents of the example-files directory: USD oc create configmap game-config --from-file=example-files/ View the keys in the config map: USD oc describe configmaps game-config Example output Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of oc describe only shows the names of the keys and their sizes. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map holding the content of each file in this directory by entering the following command: USD oc create configmap game-config \ --from-file=example-files/ Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps game-config -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: "407" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985 2.8.3.2. Creating a config map from a file You can create a config map from a file by using the --from-file flag. You can pass the --from-file option multiple times to the CLI. You can also specify the key to set in a config map for content imported from a file by passing a key=value expression to the --from-file option. For example: USD oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties Note If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. OpenShift Container Platform detects binary files and transparently encodes the file as MIME . On the server, the MIME payload is decoded and stored without corrupting the data. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map by specifying a specific file: USD oc create configmap game-config-2 \ --from-file=example-files/game.properties \ --from-file=example-files/ui.properties Create a config map by specifying a key-value pair: USD oc create configmap game-config-3 \ --from-file=game-special-key=example-files/game.properties Verification Enter the oc get command for the object with the -o option to see the values of the keys from the file: USD oc get configmaps game-config-2 -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: "516" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985 Enter the oc get command for the object with the -o option to see the values of the keys from the key-value pair: USD oc get configmaps game-config-3 -o yaml Example output apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985 1 This is the key that you set in the preceding step. 2.8.3.3. Creating a config map from literal values You can supply literal values for a config map. The --from-literal option takes a key=value syntax, which allows literal values to be supplied directly on the command line. Procedure Create a config map by specifying a literal value: USD oc create configmap special-config \ --from-literal=special.how=very \ --from-literal=special.type=charm Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps special-config -o yaml Example output apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985 2.8.4. Use cases: Consuming config maps in pods The following sections describe some use cases when consuming ConfigMap objects in pods. 2.8.4.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names.
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. When the variable is optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: SPECIAL_LEVEL_KEY=very log_level=INFO Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 2.8.4.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables.
When this pod is run, the output from the echo command run in the test-container container is as follows: very charm 2.8.4.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple of different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: very You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: very 2.9. Using device plugins to access external resources with pods Device plugins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code. 2.9.1. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources.
Any device plugin must support the following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state changes or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartContainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM(R) Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 2.9.1.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager. Because device plugins must manage hardware resources, require access to the host file system, and create sockets, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 2.9.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes the ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation.
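Because Device Manager advertises these devices as Extended Resources, a workload requests them with the familiar requests/limits stanza. The following is a minimal sketch only; the resource name example.vendor.com/device and the pod, container, and image names are placeholders, and the real resource name depends on the vendor's device plugin:

Example pod requesting an extended resource advertised by a device plugin (sketch)

apiVersion: v1
kind: Pod
metadata:
  name: device-consumer               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: workload
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "3600"]
    resources:
      requests:
        example.vendor.com/device: "1"   # placeholder extended resource name
      limits:
        example.vendor.com/device: "1"   # extended resources cannot be overcommitted, so request and limit match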
Device Manager checks its database to verify whether a corresponding plugin exists. If the plugin exists and its local cache reports free allocatable devices, the Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 2.9.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to true . Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created at Kubelet startup only if Device Manager is enabled. 2.10. Including pod priority in pod scheduling decisions You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. Pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node. Pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node. To use priority and preemption, you create priority classes that define the relative weight of your pods. Then, reference a priority class in the pod specification to apply that weight for scheduling. 2.10.1. Understanding pod priority When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, the scheduler continues to schedule other lower-priority pods. 2.10.1.1. Pod priority classes You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority.
A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. By default, OpenShift Container Platform has two reserved priority classes for critical system pods to have guaranteed scheduling. USD oc get priorityclasses Example output NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are sdn-ovs , sdn , and so forth. A number of critical components include the system-node-critical priority class by default, for example: master-api master-controller master-etcd sdn sdn-ovs sync system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the system-node-critical priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the system-cluster-critical priority class by default, for example: fluentd metrics-server descheduler openshift-user-critical - You can use the priorityClassName field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the openshift-monitoring and openshift-user-workload-monitoring namespaces use the openshift-user-critical priorityClassName . Monitoring workloads use system-critical as their first priorityClass , but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating. cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes ahead of other apps. 2.10.1.2. Pod priority names After you have one or more priority classes, you can create pods that specify a priority class name in a Pod spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected. 2.10.2. Understanding pod preemption When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod. When the scheduler preempts one or more pods on a node, the nominatedNodeName field of the higher-priority Pod spec is set to the name of the node, along with the nodeName field. The scheduler uses the nominatedNodeName field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the clusters.
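One way to observe this behavior is to read the field back from a pending high-priority pod. This is a sketch only; my-high-priority-pod is a placeholder name:

oc get pod my-high-priority-pod -o yaml | grep nominatedNodeName

If the scheduler has reserved a node for the pod by preempting lower-priority pods, the output shows that node's name; otherwise the field is absent.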
After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while the scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. As a result, the nominatedNodeName field and nodeName field of the Pod spec might be different. Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher priority than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the nominatedNodeName of the pending pod, making the pod eligible for another node. Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing a portion of the lower-priority pods. The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node. 2.10.2.1. Non-preempting priority classes Pods with the preemption policy set to Never are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them. Non-preempting pods can still be preempted by other, high-priority pods. 2.10.2.2. Pod preemption and other scheduler settings If you enable pod priority and preemption, consider your other scheduler settings: Pod priority and pod disruption budget A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, OpenShift Container Platform respects them when preempting pods at a best effort level. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements. Pod priority and pod affinity Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label. If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node and the pending pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. 2.10.2.3. Graceful termination of preempted pods When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node. To minimize this gap, configure a small graceful termination period for lower-priority pods.
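For example, a low-priority batch workload that can be stopped safely might declare a short grace period so that a preempted node frees up quickly. The following is a minimal sketch only; the pod name, image, and the low-priority priority class are illustrative and assume that such a priority class already exists in the cluster:

Example low-priority pod with a short graceful termination period (sketch)

apiVersion: v1
kind: Pod
metadata:
  name: low-priority-batch            # illustrative name
spec:
  priorityClassName: low-priority     # assumes a low-value PriorityClass named low-priority exists
  terminationGracePeriodSeconds: 10   # short grace period so preemption completes quickly
  restartPolicy: Never
  containers:
  - name: batch
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "3600"]

2.10.3.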
Configuring priority and preemption You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the priorityClassName in your pod specs. Note You cannot add a priority class directly to an existing scheduled pod. Procedure To configure your cluster to use priority and preemption: Create one or more priority classes: Create a YAML file similar to the following: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: "This priority class should be used for XYZ service pods only." 5 1 The name of the priority class object. 2 The priority value of the object. 3 Optional. Specifies whether this priority class is preempting or non-preempting. The preemption policy defaults to PreemptLowerPriority , which allows pods of that priority class to preempt lower-priority pods. If the preemption policy is set to Never , pods in that priority class are non-preempting. 4 Optional. Specifies whether this priority class should be used for pods without a priority class name specified. This field is false by default. Only one priority class with globalDefault set to true can exist in the cluster. If there is no priority class with globalDefault:true , the priority of pods with no priority class name is zero. Adding a priority class with globalDefault:true affects only pods created after the priority class is added and does not change the priorities of existing pods. 5 Optional. Describes which pods developers should use with this priority class. Enter an arbitrary text string. Create the priority class: USD oc create -f <file-name>.yaml Create a pod spec to include the name of a priority class: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1 1 Specify the priority class to use with this pod. Create the pod: USD oc create -f <file-name>.yaml You can add the priority name directly to the pod configuration or to a pod template. 2.11. Placing pods on specific nodes using node selectors A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below. 2.11.1. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. 
Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-7pwkc pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.29.4 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ...
template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 2.12. Run Once Duration Override Operator 2.12.1. Run Once Duration Override Operator overview You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. 2.12.1.1. About the Run Once Duration Override Operator OpenShift Container Platform relies on run-once pods to perform tasks such as deploying a pod or performing a build. Run-once pods are pods that have a RestartPolicy of Never or OnFailure . Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that those run-once pods can be active. After the time limit expires, the cluster will try to actively terminate those pods. The main reason to have such a limit is to prevent tasks such as builds from running for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.12.2. Run Once Duration Override Operator release notes Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that run-once pods can be active. After the time limit expires, the cluster tries to terminate the run-once pods. The main reason to have such a limit is to prevent tasks such as builds from running for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. These release notes track the development of the Run Once Duration Override Operator for OpenShift Container Platform. For an overview of the Run Once Duration Override Operator, see About the Run Once Duration Override Operator . 2.12.2.1. Run Once Duration Override Operator 1.1.2 Issued: 31 October 2024 The following advisory is available for the Run Once Duration Override Operator 1.1.2: RHSA-2024:8337 2.12.2.1.1. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.12.2.2. Run Once Duration Override Operator 1.1.1 Issued: 1 July 2024 The following advisory is available for the Run Once Duration Override Operator 1.1.1: RHSA-2024:1616 2.12.2.2.1. New features and enhancements You can install and use the Run Once Duration Override Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode .
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 2.12.2.2.2. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.12.2.3. Run Once Duration Override Operator 1.1.0 Issued: 28 February 2024 The following advisory is available for the Run Once Duration Override Operator 1.1.0: RHSA-2024:0269 2.12.2.3.1. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.12.3. Overriding the active deadline for run-once pods You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. By enabling the run-once duration override on a namespace, all future run-once pods created or updated in that namespace have their activeDeadlineSeconds field set to the value specified by the Run Once Duration Override Operator. Note If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.12.3.1. Installing the Run Once Duration Override Operator You can use the web console to install the Run Once Duration Override Operator. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Run Once Duration Override Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-run-once-duration-override-operator in the Name field and click Create . Install the Run Once Duration Override Operator. Navigate to Operators OperatorHub . Enter Run Once Duration Override Operator into the filter box. Select the Run Once Duration Override Operator and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Run Once Duration Override Operator. Select A specific namespace on the cluster . Choose openshift-run-once-duration-override-operator from the dropdown menu under Installed namespace . Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Create a RunOnceDurationOverride instance. From the Operators Installed Operators page, click Run Once Duration Override Operator . Select the Run Once Duration Override tab and click Create RunOnceDurationOverride . Edit the settings as necessary. Under the runOnceDurationOverride section, you can update the spec.activeDeadlineSeconds value, if required. The predefined value is 3600 seconds, or 1 hour. Click Create . Verification Log in to the OpenShift CLI. Verify all pods are created and running properly. 
USD oc get pods -n openshift-run-once-duration-override-operator Example output NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s 2.12.3.2. Enabling the run-once duration override on a namespace To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. Prerequisites The Run Once Duration Override Operator is installed. Procedure Log in to the OpenShift CLI. Add the label to enable the run-once duration override to your namespace: USD oc label namespace <namespace> \ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true 1 Specify the namespace to enable the run-once duration override on. After you enable the run-once duration override on this namespace, future run-once pods that are created in this namespace will have their activeDeadlineSeconds field set to the override value from the Run Once Duration Override Operator. Existing pods in this namespace will also have their activeDeadlineSeconds value set when they are updated . Verification Create a test run-once pod in the namespace that you enabled the run-once duration override on: apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done 1 Replace <namespace> with the name of your namespace. 2 The restartPolicy must be Never or OnFailure to be a run-once pod. Verify that the pod has its activeDeadlineSeconds field set: USD oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds Example output activeDeadlineSeconds: 3600 2.12.3.3. Updating the run-once active deadline override value You can customize the override value that the Run Once Duration Override Operator applies to run-once pods. The predefined value is 3600 seconds, or 1 hour. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift CLI. Edit the RunOnceDurationOverride resource: USD oc edit runoncedurationoverride cluster Update the activeDeadlineSeconds field: apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: # ... spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1 # ... 1 Set the activeDeadlineSeconds field to the desired value, in seconds. Save the file to apply the changes. Any future run-once pods created in namespaces where the run-once duration override is enabled will have their activeDeadlineSeconds field set to this new value. Existing run-once pods in these namespaces will receive this new value when they are updated. 2.12.4. Uninstalling the Run Once Duration Override Operator You can remove the Run Once Duration Override Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 2.12.4.1. Uninstalling the Run Once Duration Override Operator You can use the web console to uninstall the Run Once Duration Override Operator. 
Uninstalling the Run Once Duration Override Operator does not unset the activeDeadlineSeconds field for run-once pods, but it will no longer apply the override value to future run-once pods. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select openshift-run-once-duration-override-operator from the Project dropdown list. Delete the RunOnceDurationOverride instance. Click Run Once Duration Override Operator and select the Run Once Duration Override tab. Click the Options menu next to the cluster entry and select Delete RunOnceDurationOverride . In the confirmation dialog, click Delete . Uninstall the Run Once Duration Override Operator. Navigate to Operators Installed Operators . Click the Options menu next to the Run Once Duration Override Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 2.12.4.2. Uninstalling Run Once Duration Override Operator resources Optionally, after uninstalling the Run Once Duration Override Operator, you can remove its related resources from your cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have uninstalled the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were created when the Run Once Duration Override Operator was installed: Navigate to Administration CustomResourceDefinitions . Enter RunOnceDurationOverride in the Name field to filter the CRDs. Click the Options menu next to the RunOnceDurationOverride CRD and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . Delete the openshift-run-once-duration-override-operator namespace. Navigate to Administration Namespaces . Enter openshift-run-once-duration-override-operator into the filter box. Click the Options menu next to the openshift-run-once-duration-override-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-run-once-duration-override-operator and click Delete . Remove the run-once duration override label from the namespaces that it was enabled on. Navigate to Administration Namespaces . Select your namespace. Click Edit next to the Labels field. Remove the runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true label and click Save .
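If you prefer the CLI for this last step, you can remove the label by appending a hyphen to the label key; replace <namespace> with the namespace that you previously enabled:

oc label namespace <namespace> runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled-

This is equivalent to removing the label in the web console and stops the override from being applied to new run-once pods in that namespace.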
[ "kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi", "oc project <project-name>", "oc get pods", "oc get pods", "NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>", "oc adm top pods", "oc adm top pods -n openshift-console", "NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi", "oc adm top pod --selector=''", "oc adm top pod --selector='name=my-pod'", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }", "oc create -f <file_or_dir_path>", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1", "oc create -f pod-disruption-budget.yaml", "apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1", "oc create -f <file-name>.yaml", "oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75", "horizontalpodautoscaler.autoscaling/hello-node autoscaled", "apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: 
hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0", "oc get deployment hello-node", "NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config", "type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60", "behavior: scaleDown: stabilizationWindowSeconds: 300", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled", "oc edit hpa hpa-resource-metrics-memory", "apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4", "oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11", "oc create -f <file-name>.yaml", "oc get hpa cpu-autoscale", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler", "Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: 
/apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max", "oc create -f <file-name>.yaml", "oc create -f hpa.yaml", "horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created", "oc get hpa hpa-resource-metrics-memory", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m", "oc describe hpa hpa-resource-metrics-memory", "Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target", "oc describe hpa cm-test", "Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group 
\"apps\"", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "oc describe hpa <pod-name>", "oc describe hpa cm-test", "Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range", "oc get all -n openshift-vertical-pod-autoscaler", "NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 
c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>", "oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 3", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"", "oc get pods -n openshift-vertical-pod-autoscaler -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>", "resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi", "resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k", "oc get vpa <vpa-name> --output yaml", "status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler 
resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3", "oc get vpa <vpa-name> --output yaml", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"", "spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi", "spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15", "apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M", "apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi", "apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi", "apiVersion: v1 1 kind: ServiceAccount metadata: name: 
alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>", "apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true", "oc get pods", "NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender", "oc create -f <file-name>.yaml", "oc get vpa <vpa-name> --output yaml", "status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod", 
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"", "oc delete namespace openshift-vertical-pod-autoscaler", "oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io", "oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io", "oc delete crd verticalpodautoscalers.autoscaling.k8s.io", "oc delete MutatingWebhookConfiguration vpa-webhook-config", "oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB", "apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: 
kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com", "oc create sa <service_account_name> -n <your_namespace>", "apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3", "oc apply -f service-account-token-secret.yaml", "oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1", "ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA", "curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2", "apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1", "kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f <file-name>.yaml", "oc get secrets", "NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m", "oc describe secret my-cert", "Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes", "apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", 
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: 
secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret", "oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: 
cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testParameter", "oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: 
mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers", "oc apply -f azure-provider.yaml", "SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"", "SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"", "oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"", "oc create -f secret-provider-class-azure.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4", "oc create -f deployment.yaml", "oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/", "secret1", "oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1", "my-secret-value", "helm repo add hashicorp https://helm.releases.hashicorp.com", "helm repo update", "oc new-project vault", "oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc adm policy add-scc-to-user privileged -z vault -n vault", "oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault", "helm install vault hashicorp/vault --namespace=vault --set \"server.dev.enabled=true\" --set \"injector.enabled=false\" --set \"csi.enabled=true\" --set \"global.openshift=true\" --set 
\"injector.agentImage.repository=docker.io/hashicorp/vault\" --set \"server.image.repository=docker.io/hashicorp/vault\" --set \"csi.image.repository=docker.io/hashicorp/vault-csi-provider\" --set \"csi.agent.image.repository=docker.io/hashicorp/vault\" --set \"csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers\"", "oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/securityContext\", \"value\": {\"privileged\": true} }]'", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s", "oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value", "oc exec vault-0 --namespace=vault -- vault kv get secret/example1", "= Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value", "oc exec vault-0 --namespace=vault -- vault auth enable kubernetes", "Success! Enabled kubernetes auth method at: kubernetes/", "TOKEN_REVIEWER_JWT=\"USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"", "KUBERNETES_SERVICE_IP=\"USD(oc get svc kubernetes --namespace=default -o go-template=\"{{ .spec.clusterIP }}\")\"", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config issuer=\"https://kubernetes.default.svc.cluster.local\" token_reviewer_jwt=\"USD{TOKEN_REVIEWER_JWT}\" kubernetes_host=\"https://USD{KUBERNETES_SERVICE_IP}:443\" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", "Success! Data written to: auth/kubernetes/config", "oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF", "Success! Uploaded policy: csi", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi bound_service_account_names=default bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace policies=csi ttl=20m", "Success! 
Data written to: auth/kubernetes/role/csi", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m", "oc get pods -n openshift-cluster-csi-drivers | grep -E \"secrets\"", "secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: \"csi\" vaultAddress: \"http://vault.vault:8200\" objects: | - secretPath: \"secret/data/example1\" objectName: \"testSecret1\" secretKey: \"testSecret1", "oc create -f secret-provider-class-vault.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-vault-provider\" 3", "oc create -f deployment.yaml", "oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret1", "oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1", "my-secret-value", "oc edit secretproviderclass my-azure-provider 1", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"", "oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1", "status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount", "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "oc create configmap <configmap_name> [options]", "oc create configmap game-config --from-file=example-files/", "oc describe configmaps game-config", "Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 
bytes", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config --from-file=example-files/", "oc get configmaps game-config -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "oc get configmaps game-config-2 -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985", "oc get configmaps game-config-3 -o yaml", "apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "oc get configmaps special-config -o yaml", "apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox 
command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "SPECIAL_LEVEL_KEY=very log_level=INFO", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "very charm", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never", "very", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "oc get priorityclasses", "NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1", "oc create -f <file-name>.yaml", "oc describe pod router-default-66d5cf9464-7pwkc", "kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464", "apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.29.4", "kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux 
node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node", "oc get pods -n openshift-run-once-duration-override-operator", "NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s", "oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true", "apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done", "oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds", "activeDeadlineSeconds: 3600", "oc edit runoncedurationoverride cluster", "apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/nodes/working-with-pods
Chapter 12. Hardening the Shared File System (Manila)
Chapter 12. Hardening the Shared File System (Manila)

The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-project cloud environment. With manila, you can create a shared file system and manage its properties, such as visibility, accessibility, and quotas. For more information on manila, see the Storage Guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/storage_guide/

12.1. Security considerations for manila

Manila is registered with keystone, allowing you to locate the API using the manila endpoints command. By default, the manila API service listens only on port 8786 with tcp6, which supports both IPv4 and IPv6.

Manila uses multiple configuration files; these are stored in /var/lib/config-data/puppet-generated/manila/.

It is recommended that you configure manila to run under a non-root service account, and change file permissions so that only the system administrator can modify them. Manila expects that only administrators can write to configuration files, and that services can only read them through their group membership in the manila group. Other users must not be able to read these files, as they contain service account passwords.

Note: Only the root user should own, and be able to write to, the configuration for manila-rootwrap in rootwrap.conf and the manila-rootwrap command filters for share nodes in rootwrap.d/share.filters.

12.2. Network and security models for manila

A share driver in manila is a Python class that can be set for the back end to manage share operations, some of which are vendor-specific. The back end is an instance of the manila-share service. Manila has share drivers for many different storage systems, supporting both commercial vendors and open source solutions.

Each share driver supports one or more back end modes: share servers and no share servers. An administrator selects a mode by specifying it in manila.conf, using driver_handles_share_servers.

A share server is a logical Network Attached Storage (NAS) server that exports shared file systems. Back-end storage systems today are sophisticated and can isolate data paths and network paths between different OpenStack projects. A share server provisioned by a manila share driver is created on an isolated network that belongs to the project user creating it. The share servers mode can be configured with either a flat network or a segmented network, depending on the network provider.

It is possible for separate drivers for different modes to use the same hardware. Depending on the chosen mode, you might need to provide more configuration details through the configuration file.

12.3. Share backend modes

Each share driver supports at least one of the available driver modes:

Share servers - driver_handles_share_servers = True - The share driver creates share servers and manages the share server life cycle.
No share servers - driver_handles_share_servers = False - An administrator (rather than a share driver) manages the bare metal storage with a network interface, instead of relying on the presence of share servers.

No share servers mode - In this mode, drivers do not set up share servers, and consequently do not need to set up any new network interfaces. It is assumed that the storage controller being managed by the driver has all of the network interfaces it is going to need. Drivers create shares directly without previously creating a share server.
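The mode is selected per back end in manila.conf with the driver_handles_share_servers option described above. The following is a minimal illustrative sketch only; the back-end section names are hypothetical, and the drivers and values shown are examples rather than a recommended configuration:

[DEFAULT]
enabled_share_backends = generic_backend,cephfs_backend

[generic_backend]
share_backend_name = GENERIC
share_driver = manila.share.drivers.generic.GenericShareDriver
# This driver creates share servers on project networks and manages their life cycle.
driver_handles_share_servers = True

[cephfs_backend]
share_backend_name = CEPHFS
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
# The administrator manages the storage and networking; no share servers are created.
driver_handles_share_servers = False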
To create shares using drivers operating in no share servers mode, manila does not require users to create any private share networks either.

Note: In no share servers mode, manila assumes that the network interfaces through which any shares are exported are already reachable by all projects.

In the no share servers mode, a share driver does not handle the share server life cycle. An administrator is expected to handle the storage, networking, and other host-side configuration that might be necessary to provide project isolation. In this mode, an administrator can set up storage as a host that exports shares. All projects within the OpenStack cloud share a common network pipe. Lack of isolation can impact security and quality of service. When using share drivers that do not handle share servers, cloud users cannot be sure that their shares cannot be accessed by untrusted users by a tree walk over the top directory of their file systems. In public clouds it is possible for all network bandwidth to be used by one client, so an administrator should take care that this does not happen. Network balancing can be done by any means, and not necessarily just with OpenStack tools.

Share servers mode - In this mode, a driver is able to create share servers and plug them into existing OpenStack networks. Manila determines whether a new share server is required, and provides all the networking information necessary for the share drivers to create the requisite share server. When creating shares in the driver mode that handles share servers, users must provide a share network on which they expect their shares to be exported. Manila uses this network to create network ports for the share server on this network.

Users can configure security services in both the share servers and no share servers back end modes. With the no share servers back end mode, an administrator must set up the required authentication services manually on the host. In share servers mode, manila can configure the security services identified by the users on the share servers it spawns.

12.4. Networking requirements for manila

Manila can integrate with different network types: flat, GRE, VLAN, VXLAN.

Note: Manila only stores the network information in the database, with the real networks being supplied by the network provider. Manila supports using the OpenStack Networking service (neutron) and also "standalone" pre-configured networking.

In the share servers back end mode, a share driver creates and manages a share server for each share network. This mode can be divided into two variations:

Flat network in share servers backend mode
Segmented network in share servers backend mode

Users can use a network and subnet from the OpenStack Networking (neutron) service to create share networks. If the administrator decides to use the StandAloneNetworkPlugin, users need not provide any networking information, since the administrator pre-configures this in the configuration file.

Note: Share servers spawned by some share drivers are Compute servers created with the Compute service. A few of these drivers do not support network plugins.

After a share network is created, manila retrieves the network information determined by the network provider: the network type, the segmentation identifier (if the network uses segmentation), and the IP block in CIDR notation from which to allocate the network.

Users can create security services that specify security requirements such as AD or LDAP domains or a Kerberos realm.
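As a rough illustration of this workflow with the manila client, the following commands create a share network from an existing neutron network and subnet and then define an Active Directory security service. All names, IDs, domain values, and credentials are placeholders:

# Create a share network backed by an existing neutron network and subnet.
manila share-network-create --name my-share-network \
    --neutron-net-id <neutron_network_id> \
    --neutron-subnet-id <neutron_subnet_id>

# Define an Active Directory security service for that environment.
manila security-service-create active_directory \
    --name my-ad-service \
    --dns-ip 192.0.2.10 \
    --domain example.com \
    --user ShareAdmin \
    --password <password>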
Manila assumes that any hosts referred to in a security service are reachable from the subnet where a share server is created, which limits the number of cases where this mode can be used.

Note: Some share drivers might not support all types of segmentation; for more details, see the specification for the driver you are using.

12.5. Security services with manila

Manila can restrict access to file shares by integrating with network authentication protocols. Each project can have its own authentication domain that functions separately from the cloud's keystone authentication domain. This project domain can be used to provide authorization (AuthZ) services to applications that run within the OpenStack cloud, including manila. Available authentication protocols include LDAP, Kerberos, and the Microsoft Active Directory authentication service.

12.6. Introduction to security services

After creating a share and getting its export location, users have no permissions to mount it or operate on its files. Users need to explicitly grant access to the new share. The client authentication and authorization (authN/authZ) can be performed in conjunction with security services. Manila can use LDAP, Kerberos, or Microsoft Active Directory if they are supported by the share drivers and back ends.

Note: In some cases, it is required to explicitly specify one of the security services. For example, the NetApp, EMC, and Windows drivers require Active Directory for the creation of shares with the CIFS protocol.

12.7. Security services management

A security service is a manila entity that abstracts a set of options that define a security zone for a particular shared file system protocol, such as an Active Directory domain or a Kerberos domain. The security service contains all of the information necessary for manila to create a server that joins a given domain. Using the API, users can create, update, view, and delete a security service.

Security services are designed on the following assumptions:

Projects provide details for the security service.
Administrators care about security services: they configure the server side of such security services.
Inside the manila API, a security_service is associated with the share_networks.
Share drivers use data in the security service to configure newly created share servers.

When creating a security service, you can select one of these authentication services:

LDAP - The Lightweight Directory Access Protocol. An application protocol for accessing and maintaining distributed directory information services over an IP network.
Kerberos - The network authentication protocol which works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.
Active Directory - A directory service that Microsoft developed for Windows domain networks. It uses LDAP, Microsoft's version of Kerberos, and DNS.

Manila allows you to configure a security service with these options:

A DNS IP address that is used inside the project network.
An IP address or hostname of a security service.
A domain of a security service.
A user or group name that is used by a project.
A password for a user, if you specify a username.

An existing security service entity can be associated with share network entities that inform manila about the security and network configuration for a group of shares. You can also list all of the security services for a specified share network and disassociate them from a share network.
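For example, the association between a security service and a share network can be managed with commands along the following lines; the share network and security service names are the placeholders used in the earlier sketch:

# Associate, list, and disassociate a security service for a share network.
manila share-network-security-service-add my-share-network my-ad-service
manila share-network-security-service-list my-share-network
manila share-network-security-service-remove my-share-network my-ad-service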
An administrator and users, as share owners, can manage access to the shares by creating access rules with authentication through an IP address, user, group, or TLS certificates. Authentication methods depend on which share driver and security service you configure and use. You can then configure a back end to use a specific authentication service, which can operate with clients without manila and keystone. Note Different authentication services are supported by different share drivers. For details about which features are supported by different drivers, see https://docs.openstack.org/manila/latest/admin/share_back_ends_feature_support_mapping.html Support for a specific authentication service by a driver does not mean that it can be configured with any shared file system protocol. Supported shared file system protocols are NFS, CEPHFS, CIFS, GlusterFS, and HDFS. See the driver vendor's documentation for information on a specific driver and its configuration for security services. Some drivers support security services and other drivers do not support any of the security services mentioned above. For example, the Generic Driver with the NFS or the CIFS shared file system protocol supports only authentication through an IP address. Note In most cases, drivers that support the CIFS shared file system protocol can be configured to use Active Directory and manage access through user authentication. Drivers that support the GlusterFS protocol can be used with authentication using TLS certificates. With drivers that support the NFS protocol, authentication using an IP address is the only supported option. Because the HDFS shared file system protocol uses NFS access, it can also be configured to authenticate using an IP address. The recommended configuration for production manila deployments is to create a share with the CIFS share protocol and add to it the Microsoft Active Directory directory service. This configuration provides a centralized database and a service that integrates the Kerberos and LDAP approaches. 12.8. Share access control Users can specify which clients have access to the shares they create. Because of the keystone service, shares created by individual users are only visible to themselves and other users within the same project. Manila allows users to create shares that are "publicly" visible. These shares are visible in the dashboards of users that belong to other OpenStack projects if the owners grant them access, and those users might even be able to mount the shares if they are made accessible on the network. While creating a share, use the --public key to make your share public so that other projects can see it in a list of shares and view its detailed information. According to the policy.json file, an administrator and users, as share owners, can manage access to shares by creating access rules. Using the manila access-allow , manila access-deny , and manila access-list commands, you can grant, deny, and list access to a specified share, respectively; a short CLI sketch of these commands appears at the end of this chapter. Note Manila does not provide end-to-end management of the storage system. You will still need to separately protect the backend system from unauthorized access. As a result, the protection offered by the manila API can still be circumvented if someone compromises the backend storage device, thereby gaining out of band access. When a share is first created, there are no default access rules associated with it and no permission to mount it. This can be seen in the mount configuration for the export protocol in use.
For example, there is the NFS exportfs command or the /etc/exports file on the storage, which controls each remote share and defines the hosts that can access it. It is empty if nobody can mount a share. For a remote CIFS server, there is the net conf list command, which shows the configuration. The hosts deny parameter should be set by the share driver to 0.0.0.0/0 , which means that all hosts are denied permission to mount the share. Using manila, you can grant or deny access to a share by specifying one of these supported share access levels: rw - Read and write (RW) access. This is the default value. ro - Read-only (RO) access. Note The RO access level can be helpful in public shares when the administrator gives read and write (RW) access to certain editors or contributors and read-only (RO) access to the rest of the users (viewers). You must also specify one of these supported authentication methods: ip - Uses an IP address to authenticate an instance. IP access can be provided to clients addressable by well-formed IPv4 or IPv6 addresses or subnets denoted in CIDR notation. cert - Uses a TLS certificate to authenticate an instance. Specify the TLS identity as the IDENTKEY . A valid value is any string up to 64 characters long in the common name (CN) of the certificate. user - Authenticates by a specified user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long. Note Supported authentication methods depend on which share driver, security service, and shared file system protocol you use. Supported shared file system protocols are MapRFS, CEPHFS, NFS, CIFS, GlusterFS, and HDFS. Supported security services are LDAP, Kerberos, and the Microsoft Active Directory service. To verify that access rules (ACL) were configured correctly for a share, you can list its permissions. Note When selecting a security service for your share, you will need to consider whether the share driver is able to create access rules using the available authentication methods. Supported security services are LDAP, Kerberos, and Microsoft Active Directory. 12.9. Share type access control A share type is an administrator-defined type of service , composed of a project-visible description and a list of non-project-visible key-value pairs called extra specifications . The manila-scheduler uses extra specifications to make scheduling decisions, and drivers control the share creation. An administrator can create and delete share types, and can also manage the extra specifications that give them meaning inside manila. Projects can list the share types and can use them to create new shares. Share types can be created as public or private . This is the level of visibility for the share type that defines whether other projects can see it in a share types list and use it to create a new share. By default, share types are created as public. While creating a share type, use the --is_public parameter set to False to make your share type private, which prevents other projects from seeing it in a list of share types and creating new shares with it. On the other hand, public share types are available to every project in a cloud. Manila allows an administrator to grant or deny projects access to private share types. You can also get information about the access for a specified private share type.
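As a sketch of how a private share type, such as the my_type used in the walkthrough that follows, might be created, an administrator could run commands along these lines. The positional driver_handles_share_servers value and the option spelling are assumptions based on the tables shown below and may differ between releases.

# Create a private share type whose back end does not handle share servers
manila type-create my_type False --is_public False

# Confirm the new type is listed (initially visible only to the admin project)
manila type-list --all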
Note Because share types, through their extra specifications, help to filter or choose back ends before users create a share, you can use access to share types to limit the back ends that clients can choose. For example, an administrator user in the admin project can create a private share type named my_type and see it in the list. In the console examples below, logging in and out is omitted, and environment variables are provided to show the currently logged in user. The demo user in the demo project can list the types, and the private share type named my_type is not visible to them. The administrator can grant access to the private share type for the demo project with the project ID equal to df29a37db5ae48d19b349fe947fada46 : As a result, users in the demo project can see the private share type and use it for share creation: To deny access for a specified project, use manila type-access-remove <share_type> <project_id> . Note For an example that demonstrates the purpose of the share types, consider a situation where you have two back ends: LVM as public storage and Ceph as private storage. In this case you can grant access to certain projects and control access with the user/group authentication method. 12.10. Policies The Shared File Systems service API is gated with role-based access control policies. These policies determine which users can access certain APIs in a certain way, and are defined in the service's policy.json file. Note The configuration file policy.json may be placed anywhere. The path /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json is expected by default. Whenever an API call is made to manila, the policy engine uses the appropriate policy definitions to determine if the call can be accepted. A policy rule determines under which circumstances the API call is permitted. The /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json file has rules where an action is always permitted (when the rule is an empty string: "" ); rules based on the user role or on other rules; and rules with boolean expressions. Below is a snippet of the policy.json file for manila. It can be expected to change between OpenStack releases. Users must be assigned to the groups and roles that you refer to in your policies. This is done automatically by the service when user management commands are used. Note Any changes to /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json are effective immediately, which allows new policies to be implemented while manila is running. Manual modification of the policy can have unexpected side effects and is not encouraged. Manila does not provide a default policy file; all the default policies are within the code base. You can generate the default policies from the manila code by executing: oslopolicy-sample-generator --config-file=/var/lib/config-data/puppet-generated/manila/etc/manila/manila-policy-generator.conf
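Returning to the share access control section above, granting and reviewing access to an existing share from the CLI could look like the following sketch. The share name, subnet, user name, and access ID are hypothetical placeholders.

# Grant read/write access to a client subnet by IP address
manila access-allow my_share ip 203.0.113.0/24 --access-level rw

# Grant read-only access to a named user (requires a suitable security service)
manila access-allow my_share user alice --access-level ro

# Review the access rules configured on the share
manila access-list my_share

# Revoke a rule by its access ID
manila access-deny my_share <access_id>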
[ "manila endpoints +-------------+-----------------------------------------+ | manila | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v1/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v1/20787a7b...| | internalURL | http://172.18.198.55:8786/v1/20787a7b...| | id | 82cc5535aa444632b64585f138cb9b61 | +-------------+-----------------------------------------+ +-------------+-----------------------------------------+ | manilav2 | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v2/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v2/20787a7b...| | internalURL | http://172.18.198.55:8786/v2/20787a7b...| | id | 2e8591bfcac4405fa7e5dc3fd61a2b85 | +-------------+-----------------------------------------+", "api-paste.ini manila.conf policy.json rootwrap.conf rootwrap.d ./rootwrap.d: share.filters", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+----------------------------------+----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+----------------------------------+----------------------+ | 5..| default| public | YES | driver_handles_share_servers:True| snapshot_support:True| +----+--------+-----------+-----------+----------------------------------+----------------------+", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD openstack project list +----------------------------------+--------------------+ | ID | Name | +----------------------------------+--------------------+ | ... | ... 
| | df29a37db5ae48d19b349fe947fada46 | demo | +----------------------------------+--------------------+ USD manila type-access-add my_type df29a37db5ae48d19b349fe947fada46", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "{ \"context_is_admin\": \"role:admin\", \"admin_or_owner\": \"is_admin:True or project_id:%(project_id)s\", \"default\": \"rule:admin_or_owner\", \"share_extension:quotas:show\": \"\", \"share_extension:quotas:update\": \"rule:admin_api\", \"share_extension:quotas:delete\": \"rule:admin_api\", \"share_extension:quota_classes\": \"\", }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/security_and_hardening_guide/hardening_the_shared_file_system_manila
Chapter 1. Introduction
Chapter 1. Introduction As OpenStack consists of many different projects, it is important to test the interoperability of the projects within your OpenStack cluster. The OpenStack Integration Test Suite (tempest) automates the integration testing of your Red Hat OpenStack Platform deployment. Running tests ensures that your cluster is working as expected, and can also provide early warning of potential problems, especially after an upgrade. The Integration Test Suite contains tests for OpenStack API validation and scenario testing, as well as unit testing for self-validation. The Integration Test Suite performs black box testing using the OpenStack public APIs, with tempest as the test runner.
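A minimal sketch of what a test run can look like with the upstream tempest CLI is shown below. The working directory is a hypothetical placeholder, and the exact invocation depends on how the Integration Test Suite is deployed and configured in your environment.

# Create a tempest workspace with a configuration skeleton
tempest init ~/mytempest
cd ~/mytempest

# Run only the smoke tests as a quick post-deployment check
tempest run --smoke

# Run a subset of tests selected by regular expression
tempest run --regex '^tempest\.api\.compute'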
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/openstack_integration_test_suite_guide/chap-introduction
Chapter 6. AuthService
Chapter 6. AuthService 6.1. UpdateAuthMachineToMachineConfig PUT /v1/auth/m2m/{config.id} UpdateAuthMachineToMachineConfig updates an existing auth machine to machine config. In case the auth machine to machine config does not exist, a new one will be created. 6.1.1. Description 6.1.2. Parameters 6.1.2.1. Path Parameters Name Description Required Default Pattern config.id UUID of the config. Note that when adding a machine to machine config, this field should not be set. X null 6.1.2.2. Body Parameter Name Description Required Default Pattern body AuthServiceUpdateAuthMachineToMachineConfigBody X 6.1.3. Return Type Object 6.1.4. Content Type application/json 6.1.5. Responses Table 6.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 6.1.6. Samples 6.1.7. Common object reference 6.1.7.1. AuthMachineToMachineConfigMapping Mappings map an identity token's claim values to a specific role within Central. Field Name Required Nullable Type Description Format key String A key within the identity token's claim value to use. valueExpression String A regular expression that will be evaluated against values of the identity token claim identified by the specified key. This regular expressions is in RE2 format, see more here: https://github.com/google/re2/wiki/Syntax . role String The role which should be issued when the key and value match for a particular identity token. 6.1.7.2. AuthServiceUpdateAuthMachineToMachineConfigBody Field Name Required Nullable Type Description Format config AuthServiceUpdateAuthMachineToMachineConfigBodyConfig 6.1.7.3. AuthServiceUpdateAuthMachineToMachineConfigBodyConfig AuthMachineToMachineConfig determines rules for exchanging an identity token from a third party with a Central access token. The M2M stands for machine to machine, as this is the intended use-case for the config. Field Name Required Nullable Type Description Format type V1AuthMachineToMachineConfigType GENERIC, GITHUB_ACTIONS, KUBE_SERVICE_ACCOUNT, tokenExpirationDuration String Sets the expiration of the token returned from the ExchangeAuthMachineToMachineToken API call. Possible valid time units are: s, m, h. The maximum allowed expiration duration is 24h. As an example: 2h45m. For additional information on the validation of the duration, see: https://pkg.go.dev/time#ParseDuration . mappings List of AuthMachineToMachineConfigMapping At least one mapping is required to resolve to a valid role for the access token to be successfully generated. issuer String The issuer of the related OIDC provider issuing the ID tokens to exchange. Must be non-empty string containing URL when type is GENERIC. In case of GitHub actions, this must be empty or set to https://token.actions.githubusercontent.com . Issuer is a unique key, therefore there may be at most one GITHUB_ACTIONS config, and each GENERIC config must have a distinct issuer. 6.1.7.4. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.1.7.5. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.1.7.5.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.1.7.6. V1AuthMachineToMachineConfigType The type of the auth machine to machine config. Currently supports GitHub actions or any other generic OIDC provider to use for verifying and exchanging the token. Enum Values GENERIC GITHUB_ACTIONS KUBE_SERVICE_ACCOUNT 6.2. ExchangeAuthMachineToMachineToken POST /v1/auth/m2m/exchange ExchangeAuthMachineToMachineToken exchanges a given identity token for a Central access token based on configured auth machine to machine configs. 6.2.1. Description 6.2.2. Parameters 6.2.2.1. Body Parameter Name Description Required Default Pattern body V1ExchangeAuthMachineToMachineTokenRequest X 6.2.3. Return Type V1ExchangeAuthMachineToMachineTokenResponse 6.2.4. Content Type application/json 6.2.5. Responses Table 6.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1ExchangeAuthMachineToMachineTokenResponse 0 An unexpected error response. GooglerpcStatus 6.2.6. Samples 6.2.7. Common object reference 6.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.2.7.3. V1ExchangeAuthMachineToMachineTokenRequest Field Name Required Nullable Type Description Format idToken String Identity token that is supposed to be exchanged. 6.2.7.4. V1ExchangeAuthMachineToMachineTokenResponse Field Name Required Nullable Type Description Format accessToken String The exchanged access token. 6.3. ListAuthMachineToMachineConfigs GET /v1/auth/m2m ListAuthMachineToMachineConfigs lists the available auth machine to machine configs. 6.3.1. Description 6.3.2. Parameters 6.3.3. Return Type V1ListAuthMachineToMachineConfigResponse 6.3.4. Content Type application/json 6.3.5. Responses Table 6.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListAuthMachineToMachineConfigResponse 0 An unexpected error response. GooglerpcStatus 6.3.6. Samples 6.3.7. Common object reference 6.3.7.1. AuthMachineToMachineConfigMapping Mappings map an identity token's claim values to a specific role within Central. Field Name Required Nullable Type Description Format key String A key within the identity token's claim value to use. 
valueExpression String A regular expression that will be evaluated against values of the identity token claim identified by the specified key. This regular expressions is in RE2 format, see more here: https://github.com/google/re2/wiki/Syntax . role String The role which should be issued when the key and value match for a particular identity token. 6.3.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.3.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.3.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.3.7.4. V1AuthMachineToMachineConfig AuthMachineToMachineConfig determines rules for exchanging an identity token from a third party with a Central access token. The M2M stands for machine to machine, as this is the intended use-case for the config. Field Name Required Nullable Type Description Format id String UUID of the config. Note that when adding a machine to machine config, this field should not be set. 
type V1AuthMachineToMachineConfigType GENERIC, GITHUB_ACTIONS, KUBE_SERVICE_ACCOUNT, tokenExpirationDuration String Sets the expiration of the token returned from the ExchangeAuthMachineToMachineToken API call. Possible valid time units are: s, m, h. The maximum allowed expiration duration is 24h. As an example: 2h45m. For additional information on the validation of the duration, see: https://pkg.go.dev/time#ParseDuration . mappings List of AuthMachineToMachineConfigMapping At least one mapping is required to resolve to a valid role for the access token to be successfully generated. issuer String The issuer of the related OIDC provider issuing the ID tokens to exchange. Must be non-empty string containing URL when type is GENERIC. In case of GitHub actions, this must be empty or set to https://token.actions.githubusercontent.com . Issuer is a unique key, therefore there may be at most one GITHUB_ACTIONS config, and each GENERIC config must have a distinct issuer. 6.3.7.5. V1AuthMachineToMachineConfigType The type of the auth machine to machine config. Currently supports GitHub actions or any other generic OIDC provider to use for verifying and exchanging the token. Enum Values GENERIC GITHUB_ACTIONS KUBE_SERVICE_ACCOUNT 6.3.7.6. V1ListAuthMachineToMachineConfigResponse Field Name Required Nullable Type Description Format configs List of V1AuthMachineToMachineConfig 6.4. DeleteAuthMachineToMachineConfig DELETE /v1/auth/m2m/{id} DeleteAuthMachineToMachineConfig deletes the specific auth machine to machine config. In case a specified auth machine to machine config does not exist is deleted, no error will be returned. 6.4.1. Description 6.4.2. Parameters 6.4.2.1. Path Parameters Name Description Required Default Pattern id X null 6.4.3. Return Type Object 6.4.4. Content Type application/json 6.4.5. Responses Table 6.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 6.4.6. Samples 6.4.7. Common object reference 6.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. 
The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.5. GetAuthMachineToMachineConfig GET /v1/auth/m2m/{id} GetAuthMachineToMachineConfig retrieves the specific auth machine to machine config. 6.5.1. Description 6.5.2. Parameters 6.5.2.1. Path Parameters Name Description Required Default Pattern id X null 6.5.3. Return Type V1GetAuthMachineToMachineConfigResponse 6.5.4. Content Type application/json 6.5.5. Responses Table 6.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAuthMachineToMachineConfigResponse 0 An unexpected error response. GooglerpcStatus 6.5.6. Samples 6.5.7. Common object reference 6.5.7.1. AuthMachineToMachineConfigMapping Mappings map an identity token's claim values to a specific role within Central. Field Name Required Nullable Type Description Format key String A key within the identity token's claim value to use. valueExpression String A regular expression that will be evaluated against values of the identity token claim identified by the specified key. This regular expressions is in RE2 format, see more here: https://github.com/google/re2/wiki/Syntax . role String The role which should be issued when the key and value match for a particular identity token. 6.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.5.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.5.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.5.7.4. V1AuthMachineToMachineConfig AuthMachineToMachineConfig determines rules for exchanging an identity token from a third party with a Central access token. The M2M stands for machine to machine, as this is the intended use-case for the config. Field Name Required Nullable Type Description Format id String UUID of the config. Note that when adding a machine to machine config, this field should not be set. type V1AuthMachineToMachineConfigType GENERIC, GITHUB_ACTIONS, KUBE_SERVICE_ACCOUNT, tokenExpirationDuration String Sets the expiration of the token returned from the ExchangeAuthMachineToMachineToken API call. Possible valid time units are: s, m, h. The maximum allowed expiration duration is 24h. As an example: 2h45m. For additional information on the validation of the duration, see: https://pkg.go.dev/time#ParseDuration . mappings List of AuthMachineToMachineConfigMapping At least one mapping is required to resolve to a valid role for the access token to be successfully generated. issuer String The issuer of the related OIDC provider issuing the ID tokens to exchange. Must be non-empty string containing URL when type is GENERIC. In case of GitHub actions, this must be empty or set to https://token.actions.githubusercontent.com . Issuer is a unique key, therefore there may be at most one GITHUB_ACTIONS config, and each GENERIC config must have a distinct issuer. 6.5.7.5. V1AuthMachineToMachineConfigType The type of the auth machine to machine config. Currently supports GitHub actions or any other generic OIDC provider to use for verifying and exchanging the token. Enum Values GENERIC GITHUB_ACTIONS KUBE_SERVICE_ACCOUNT 6.5.7.6. V1GetAuthMachineToMachineConfigResponse Field Name Required Nullable Type Description Format config V1AuthMachineToMachineConfig 6.6. 
AddAuthMachineToMachineConfig POST /v1/auth/m2m AddAuthMachineToMachineConfig creates a new auth machine to machine config. 6.6.1. Description 6.6.2. Parameters 6.6.2.1. Body Parameter Name Description Required Default Pattern body V1AddAuthMachineToMachineConfigRequest X 6.6.3. Return Type V1AddAuthMachineToMachineConfigResponse 6.6.4. Content Type application/json 6.6.5. Responses Table 6.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1AddAuthMachineToMachineConfigResponse 0 An unexpected error response. GooglerpcStatus 6.6.6. Samples 6.6.7. Common object reference 6.6.7.1. AuthMachineToMachineConfigMapping Mappings map an identity token's claim values to a specific role within Central. Field Name Required Nullable Type Description Format key String A key within the identity token's claim value to use. valueExpression String A regular expression that will be evaluated against values of the identity token claim identified by the specified key. This regular expressions is in RE2 format, see more here: https://github.com/google/re2/wiki/Syntax . role String The role which should be issued when the key and value match for a particular identity token. 6.6.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.6.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.6.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. 
(Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.6.7.4. V1AddAuthMachineToMachineConfigRequest Field Name Required Nullable Type Description Format config V1AuthMachineToMachineConfig 6.6.7.5. V1AddAuthMachineToMachineConfigResponse Field Name Required Nullable Type Description Format config V1AuthMachineToMachineConfig 6.6.7.6. V1AuthMachineToMachineConfig AuthMachineToMachineConfig determines rules for exchanging an identity token from a third party with a Central access token. The M2M stands for machine to machine, as this is the intended use-case for the config. Field Name Required Nullable Type Description Format id String UUID of the config. Note that when adding a machine to machine config, this field should not be set. type V1AuthMachineToMachineConfigType GENERIC, GITHUB_ACTIONS, KUBE_SERVICE_ACCOUNT, tokenExpirationDuration String Sets the expiration of the token returned from the ExchangeAuthMachineToMachineToken API call. Possible valid time units are: s, m, h. The maximum allowed expiration duration is 24h. As an example: 2h45m. For additional information on the validation of the duration, see: https://pkg.go.dev/time#ParseDuration . mappings List of AuthMachineToMachineConfigMapping At least one mapping is required to resolve to a valid role for the access token to be successfully generated. issuer String The issuer of the related OIDC provider issuing the ID tokens to exchange. Must be non-empty string containing URL when type is GENERIC. In case of GitHub actions, this must be empty or set to https://token.actions.githubusercontent.com . Issuer is a unique key, therefore there may be at most one GITHUB_ACTIONS config, and each GENERIC config must have a distinct issuer. 6.6.7.7. V1AuthMachineToMachineConfigType The type of the auth machine to machine config. Currently supports GitHub actions or any other generic OIDC provider to use for verifying and exchanging the token. Enum Values GENERIC GITHUB_ACTIONS KUBE_SERVICE_ACCOUNT 6.7. GetAuthStatus GET /v1/auth/status GetAuthStatus returns the status for the current client. 6.7.1. Description 6.7.2. Parameters 6.7.3. Return Type V1AuthStatus 6.7.4. Content Type application/json 6.7.5. Responses Table 6.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1AuthStatus 0 An unexpected error response. GooglerpcStatus 6.7.6. Samples 6.7.7. Common object reference 6.7.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 6.7.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 6.7.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 6.7.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 6.7.7.4. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 6.7.7.5. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. 
- \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 6.7.7.6. StorageServiceIdentity Field Name Required Nullable Type Description Format serialStr String serial String int64 id String type StorageServiceType UNKNOWN_SERVICE, SENSOR_SERVICE, CENTRAL_SERVICE, CENTRAL_DB_SERVICE, REMOTE_SERVICE, COLLECTOR_SERVICE, MONITORING_UI_SERVICE, MONITORING_DB_SERVICE, MONITORING_CLIENT_SERVICE, BENCHMARK_SERVICE, SCANNER_SERVICE, SCANNER_DB_SERVICE, ADMISSION_CONTROL_SERVICE, SCANNER_V4_INDEXER_SERVICE, SCANNER_V4_MATCHER_SERVICE, SCANNER_V4_DB_SERVICE, SCANNER_V4_SERVICE, REGISTRANT_SERVICE, initBundleId String 6.7.7.7. StorageServiceType SCANNER_V4_SERVICE: This is used when Scanner V4 is run in combo-mode. 
Enum Values UNKNOWN_SERVICE SENSOR_SERVICE CENTRAL_SERVICE CENTRAL_DB_SERVICE REMOTE_SERVICE COLLECTOR_SERVICE MONITORING_UI_SERVICE MONITORING_DB_SERVICE MONITORING_CLIENT_SERVICE BENCHMARK_SERVICE SCANNER_SERVICE SCANNER_DB_SERVICE ADMISSION_CONTROL_SERVICE SCANNER_V4_INDEXER_SERVICE SCANNER_V4_MATCHER_SERVICE SCANNER_V4_DB_SERVICE SCANNER_V4_SERVICE REGISTRANT_SERVICE 6.7.7.8. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 6.7.7.9. StorageUserInfo Field Name Required Nullable Type Description Format username String friendlyName String permissions UserInfoResourceToAccess roles List of StorageUserInfoRole 6.7.7.10. StorageUserInfoRole Role is wire compatible with the old format of storage.Role and hence only includes role name and associated permissions. Field Name Required Nullable Type Description Format name String resourceToAccess Map of StorageAccess 6.7.7.11. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 6.7.7.12. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 6.7.7.13. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 6.7.7.14. 
UserInfoResourceToAccess ResourceToAccess represents a collection of permissions. It is wire compatible with the old format of storage.Role and replaces it in places where only aggregated permissions are required. Field Name Required Nullable Type Description Format resourceToAccess Map of StorageAccess 6.7.7.15. V1AuthStatus Field Name Required Nullable Type Description Format userId String serviceId StorageServiceIdentity expires Date date-time refreshUrl String authProvider StorageAuthProvider userInfo StorageUserInfo userAttributes List of V1UserAttribute idpToken String Token returned to ACS by the underlying identity provider. This field is set only in a few, specific contexts. Do not rely on this field being present in the response. 6.7.7.16. V1UserAttribute Field Name Required Nullable Type Description Format key String values List of string
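To make the request and response shapes above concrete, the following sketch creates a machine to machine config, exchanges an identity token for a Central access token, and then checks the resulting auth status. The Central address, tokens, regular expression, and role name are hypothetical placeholders, and presenting the access token as a bearer Authorization header is an assumption about the deployment; -k is shown only for test setups with self-signed certificates.

# Create a machine to machine config that maps a GitHub Actions claim to a role
curl -k -X POST "https://central.example.com/v1/auth/m2m" -H "Content-Type: application/json" -d '{"config": {"type": "GITHUB_ACTIONS", "tokenExpirationDuration": "1h", "mappings": [{"key": "sub", "valueExpression": "repo:example-org/.*", "role": "Continuous Integration"}]}}'

# Exchange a third-party identity token for a Central access token
curl -k -X POST "https://central.example.com/v1/auth/m2m/exchange" -H "Content-Type: application/json" -d '{"idToken": "<identity_token_from_oidc_provider>"}'
# Expected response shape: {"accessToken": "<central_access_token>"}

# Use the returned access token to query the current auth status
curl -k "https://central.example.com/v1/auth/status" -H "Authorization: Bearer <central_access_token>"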
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 18" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/authservice
2.6.2.2.4. Expansions
2.6.2.2.4. Expansions Expansions, when used in conjunction with the spawn and twist directives, provide information about the client, server, and processes involved. The following is a list of supported expansions: %a - Returns the client's IP address. %A - Returns the server's IP address. %c - Returns a variety of client information, such as the user name and hostname, or the user name and IP address. %d - Returns the daemon process name. %h - Returns the client's hostname (or IP address, if the hostname is unavailable). %H - Returns the server's hostname (or IP address, if the hostname is unavailable). %n - Returns the client's hostname. If unavailable, unknown is printed. If the client's hostname and host address do not match, paranoid is printed. %N - Returns the server's hostname. If unavailable, unknown is printed. If the server's hostname and host address do not match, paranoid is printed. %p - Returns the daemon's process ID. %s - Returns various types of server information, such as the daemon process and the host or IP address of the server. %u - Returns the client's user name. If unavailable, unknown is printed. The following sample rule uses an expansion in conjunction with the spawn command to identify the client host in a customized log file. When connections to the SSH daemon ( sshd ) are attempted from a host in the example.com domain, execute the echo command to log the attempt, including the client hostname (by using the %h expansion), to a special file: Similarly, expansions can be used to personalize messages back to the client. In the following example, clients attempting to access FTP services from the example.com domain are informed that they have been banned from the server: For a full explanation of available expansions, as well as additional access control options, see section 5 of the man pages for hosts_access ( man 5 hosts_access ) and the man page for hosts_options . Refer to Section 2.6.5, "Additional Resources" for more information about TCP Wrappers.
[ "sshd : .example.com : spawn /bin/echo `/bin/date` access denied to %h>>/var/log/sshd.log : deny", "vsftpd : .example.com : twist /bin/echo \"421 %h has been banned from this server!\"" ]
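As a further illustrative rule that is not taken from the guide above, several expansions can be combined in a single spawn directive to record both the user name and the address of a rejected client; the log file path is a placeholder:

ALL : .cracker.example.net : spawn /bin/echo `/bin/date` access denied for %u at %a >> /var/log/tcpwrap.log : deny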
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-option_fields-expansions
Chapter 2. ClusterAutoscaler [autoscaling.openshift.io/v1]
Chapter 2. ClusterAutoscaler [autoscaling.openshift.io/v1] Description ClusterAutoscaler is the Schema for the clusterautoscalers API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Desired state of ClusterAutoscaler resource status object Most recently observed status of ClusterAutoscaler resource 2.1.1. .spec Description Desired state of ClusterAutoscaler resource Type object Property Type Description balanceSimilarNodeGroups boolean BalanceSimilarNodeGroups enables/disables the --balance-similar-node-groups cluster-autoscaler feature. This feature will automatically identify node groups with the same instance type and the same set of labels and try to keep the respective sizes of those node groups balanced. balancingIgnoredLabels array (string) BalancingIgnoredLabels sets "--balancing-ignore-label <label name>" flag on cluster-autoscaler for each listed label. This option specifies labels that cluster autoscaler should ignore when considering node group similarity. For example, if you have nodes with "topology.ebs.csi.aws.com/zone" label, you can add name of this label here to prevent cluster autoscaler from spliting nodes into different node groups based on its value. expanders array (string) Sets the type and order of expanders to be used during scale out operations. This option specifies an ordered list, highest priority first, of expanders that will be used by the cluster autoscaler to select node groups for expansion when scaling out. Expanders instruct the autoscaler on how to choose node groups when scaling out the cluster. They can be specified in order so that the result from the first expander is used as the input to the second, and so forth. For example, if set to [LeastWaste, Random] the autoscaler will first evaluate node groups to determine which will have the least resource waste, if multiple groups are selected the autoscaler will then randomly choose between those groups to determine the group for scaling. The following expanders are available: * LeastWaste - selects the node group that will have the least idle CPU (if tied, unused memory) after scale-up. * Priority - selects the node group that has the highest priority assigned by the user. For details, please see https://github.com/openshift/kubernetes-autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md * Random - selects the node group randomly. If not specified, the default value is Random , available options are: LeastWaste , Priority , Random . ignoreDaemonsetsUtilization boolean Enables/Disables --ignore-daemonsets-utilization CA feature flag. Should CA ignore DaemonSet pods when calculating resource utilization for scaling down. false by default logVerbosity integer Sets the autoscaler log level. 
Default value is 1, level 4 is recommended for DEBUGGING and level 6 will enable almost everything. This option has priority over log level set by the CLUSTER_AUTOSCALER_VERBOSITY environment variable. maxNodeProvisionTime string Maximum time CA waits for node to be provisioned maxPodGracePeriod integer Gives pods graceful termination time before scaling down podPriorityThreshold integer To allow users to schedule "best-effort" pods, which shouldn't trigger Cluster Autoscaler actions, but only run when there are spare resources available, More info: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption resourceLimits object Constraints of autoscaling resources scaleDown object Configuration of scale down operation skipNodesWithLocalStorage boolean Enables/Disables --skip-nodes-with-local-storage CA feature flag. If true cluster autoscaler will never delete nodes with pods with local storage, e.g. EmptyDir or HostPath. true by default at autoscaler 2.1.2. .spec.resourceLimits Description Constraints of autoscaling resources Type object Property Type Description cores object Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. gpus array Minimum and maximum number of different GPUs in cluster, in the format <gpu_type>:<min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Can be passed multiple times. gpus[] object maxNodesTotal integer Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number. memory object Minimum and maximum number of GiB of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. 2.1.3. .spec.resourceLimits.cores Description Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Type object Required max min Property Type Description max integer min integer 2.1.4. .spec.resourceLimits.gpus Description Minimum and maximum number of different GPUs in cluster, in the format <gpu_type>:<min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Can be passed multiple times. Type array 2.1.5. .spec.resourceLimits.gpus[] Description Type object Required max min type Property Type Description max integer min integer type string The type of GPU to associate with the minimum and maximum limits. This value is used by the Cluster Autoscaler to identify Nodes that will have GPU capacity by searching for it as a label value on the Node objects. For example, Nodes that carry the label key cluster-api/accelerator with the label value being the same as the Type field will be counted towards the resource limits by the Cluster Autoscaler. 2.1.6. .spec.resourceLimits.memory Description Minimum and maximum number of GiB of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Type object Required max min Property Type Description max integer min integer 2.1.7. 
.spec.scaleDown Description Configuration of scale down operation Type object Required enabled Property Type Description delayAfterAdd string How long after scale up that scale down evaluation resumes delayAfterDelete string How long after node deletion that scale down evaluation resumes, defaults to scan-interval delayAfterFailure string How long after scale down failure that scale down evaluation resumes enabled boolean Should CA scale down the cluster unneededTime string How long a node should be unneeded before it is eligible for scale down utilizationThreshold string Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down 2.1.8. .status Description Most recently observed status of ClusterAutoscaler resource Type object 2.2. API endpoints The following API endpoints are available: /apis/autoscaling.openshift.io/v1/clusterautoscalers DELETE : delete collection of ClusterAutoscaler GET : list objects of kind ClusterAutoscaler POST : create a ClusterAutoscaler /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name} DELETE : delete a ClusterAutoscaler GET : read the specified ClusterAutoscaler PATCH : partially update the specified ClusterAutoscaler PUT : replace the specified ClusterAutoscaler /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name}/status GET : read status of the specified ClusterAutoscaler PATCH : partially update status of the specified ClusterAutoscaler PUT : replace status of the specified ClusterAutoscaler 2.2.1. /apis/autoscaling.openshift.io/v1/clusterautoscalers HTTP method DELETE Description delete collection of ClusterAutoscaler Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterAutoscaler Table 2.2. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterAutoscaler Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.5. 
HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 202 - Accepted ClusterAutoscaler schema 401 - Unauthorized Empty 2.2.2. /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the ClusterAutoscaler HTTP method DELETE Description delete a ClusterAutoscaler Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterAutoscaler Table 2.9. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterAutoscaler Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterAutoscaler Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 401 - Unauthorized Empty 2.2.3. /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the ClusterAutoscaler HTTP method GET Description read status of the specified ClusterAutoscaler Table 2.16. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterAutoscaler Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterAutoscaler Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 401 - Unauthorized Empty
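The following manifest is a minimal illustrative sketch rather than an example taken from this reference; it shows how several of the spec fields described above fit together, and all limits and durations are placeholder values:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  logVerbosity: 4
  maxNodeProvisionTime: 15m
  podPriorityThreshold: -10
  resourceLimits:
    maxNodesTotal: 24
    cores:
      min: 8
      max: 128
    memory:
      min: 4
      max: 256
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
    unneededTime: 5m
    utilizationThreshold: "0.4"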
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/autoscale_apis/clusterautoscaler-autoscaling-openshift-io-v1
Installing on IBM Power
Installing on IBM Power OpenShift Container Platform 4.18 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power/index
Chapter 6. Uninstalling a cluster on AWS
Chapter 6. Uninstalling a cluster on AWS You can remove a cluster that you deployed to Amazon Web Services (AWS). 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 6.2. Deleting Amazon Web Services resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Amazon Web Services (AWS) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on AWS that uses short-term credentials. Procedure Delete the AWS resources that ccoctl created by running the following command: USD ccoctl aws delete \ --name=<name> \ 1 --region=<aws_region> 2 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <aws_region> is the AWS region in which to delete cloud resources. 
Example output 2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted Verification To verify that the resources are deleted, query AWS. For more information, refer to AWS documentation. 6.3. Deleting a cluster with a configured AWS Local Zone infrastructure After you install a cluster on Amazon Web Services (AWS) into an existing Virtual Private Cloud (VPC), and you set subnets for each Local Zone location, you can delete the cluster and any AWS resources associated with it. The example in the procedure assumes that you created a VPC and its subnets by using a CloudFormation template. Prerequisites You know the name of the CloudFormation stacks, <local_zone_stack_name> and <vpc_stack_name> , that were used during the creation of the network. You need the name of the stack to delete the cluster. You have access rights to the directory that contains the installation files that were created by the installation program. Your account includes a policy that provides you with permissions to delete the CloudFormation stack. Procedure Change to the directory that contains the stored installation program, and delete the cluster by using the destroy cluster command: USD ./openshift-install destroy cluster --dir <installation_directory> \ 1 --log-level=debug 2 1 For <installation_directory> , specify the directory that stored any files created by the installation program. 2 To view different log details, specify error , info , or warn instead of debug . Delete the CloudFormation stack for the Local Zone subnet: USD aws cloudformation delete-stack --stack-name <local_zone_stack_name> Delete the stack of resources that represent the VPC: USD aws cloudformation delete-stack --stack-name <vpc_stack_name> Verification Check that you removed the stack resources by issuing the following commands in the AWS CLI. 
The AWS CLI outputs that no template component exists. USD aws cloudformation describe-stacks --stack-name <local_zone_stack_name> USD aws cloudformation describe-stacks --stack-name <vpc_stack_name> Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. Opt into AWS Local Zones AWS Local Zones available locations AWS Local Zones features
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2", "2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted", "./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2", "aws cloudformation delete-stack --stack-name <local_zone_stack_name>", "aws cloudformation delete-stack --stack-name <vpc_stack_name>", "aws cloudformation describe-stacks --stack-name <local_zone_stack_name>", "aws cloudformation describe-stacks --stack-name <vpc_stack_name>" ]
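As an illustrative follow-up to the note about leftover resources, and not part of the original procedure, any remaining AWS resources that still carry the cluster's ownership tag can be listed with the Resource Groups Tagging API; the infrastructure ID below is a placeholder:

aws resourcegroupstaggingapi get-resources --tag-filters Key=kubernetes.io/cluster/<infrastructure_id>,Values=owned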
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_aws/uninstalling-cluster-aws
4.2. Configuring Timeout Values for a Cluster
4.2. Configuring Timeout Values for a Cluster When you create a cluster with the pcs cluster setup command, timeout values for the cluster are set to default values that should be suitable for most cluster configurations. If your system requires different timeout values, however, you can modify these values with the pcs cluster setup options summarized in Table 4.1, "Timeout Options". Table 4.1. Timeout Options Option Description --token timeout Sets time in milliseconds until a token loss is declared after not receiving a token (default 1000 ms) --join timeout Sets time in milliseconds to wait for join messages (default 50 ms) --consensus timeout Sets time in milliseconds to wait for consensus to be achieved before starting a new round of membership configuration (default 1200 ms) --miss_count_const count Sets the maximum number of times that a message is checked for retransmission on receipt of a token before a retransmission occurs (default 5 messages) --fail_recv_const failures Specifies how many rotations of the token without receiving any messages, when messages should be received, may occur before a new configuration is formed (default 2500 failures) For example, the following command creates the cluster new_cluster and sets the token timeout value to 10000 milliseconds (10 seconds) and the join timeout value to 100 milliseconds.
[ "pcs cluster setup --name new_cluster nodeA nodeB --token 10000 --join 100" ]
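As an additional illustration that is not taken from the referenced release, several of the options from Table 4.1 can be combined in a single command; the node names and values here are placeholders:

pcs cluster setup --name my_cluster nodeA nodeB --token 15000 --consensus 18000 --join 100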
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-configtimeout-HAAR
Chapter 5. Adding network interfaces
Chapter 5. Adding network interfaces Satellite supports specifying multiple network interfaces for a single host. You can configure these interfaces when creating a new host as described in Section 2.1, "Creating a host in Red Hat Satellite" or when editing an existing host. There are several types of network interfaces that you can attach to a host. When adding a new interface, select one of: Interface : Allows you to specify an additional physical or virtual interface. There are two types of virtual interfaces you can create. Use VLAN when the host needs to communicate with several (virtual) networks by using a single interface, while these networks are not accessible to each other. Use alias to add an additional IP address to an existing interface. For more information about adding a physical interface, see Section 5.1, "Adding a physical interface" . For more information about adding a virtual interface, see Section 5.2, "Adding a virtual interface" . Bond : Creates a bonded interface. NIC bonding is a way to bind multiple network interfaces together into a single interface that appears as a single device and has a single MAC address. This enables two or more network interfaces to act as one, increasing the bandwidth and providing redundancy. For more information, see Section 5.3, "Adding a bonded interface" . BMC : Baseboard Management Controller (BMC) allows you to remotely monitor and manage the physical state of machines. For more information about BMC, see Enabling Power Management on Hosts in Installing Satellite Server in a connected network environment . For more information about configuring BMC interfaces, see Section 5.5, "Adding a baseboard management controller (BMC) interface" . Note Additional interfaces have the Managed flag enabled by default, which means the new interface is configured automatically during provisioning by the DNS and DHCP Capsule Servers associated with the selected subnet. This requires a subnet with correctly configured DNS and DHCP Capsule Servers. If you use a Kickstart method for host provisioning, configuration files are automatically created for managed interfaces in the post-installation phase at /etc/sysconfig/network-scripts/ifcfg- interface_id . Note Virtual and bonded interfaces currently require a MAC address of a physical device. Therefore, the configuration of these interfaces works only on bare-metal hosts. 5.1. Adding a physical interface Use this procedure to add an additional physical interface to a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Keep the Interface option selected in the Type list. Specify a MAC address . This setting is required. Specify the Device Identifier , for example eth0 . The identifier is used to specify this physical interface when creating bonded interfaces, VLANs, and aliases. Specify the DNS name associated with the host's IP address. Satellite saves this name in Capsule Server associated with the selected domain (the "DNS A" field) and Capsule Server associated with the selected subnet (the "DNS PTR" field). A single host can therefore have several DNS entries. Select a domain from the Domain list. To create and manage domains, navigate to Infrastructure > Domains . Select a subnet from the Subnet list. To create and manage subnets, navigate to Infrastructure > Subnets . Specify the IP address . 
Managed interfaces with an assigned DHCP Capsule Server require this setting for creating a DHCP lease. DHCP-enabled managed interfaces are automatically provided with a suggested IP address. Select whether the interface is Managed . If the interface is managed, configuration is pulled from the associated Capsule Server during provisioning, and DNS and DHCP entries are created. If using Kickstart provisioning, a configuration file is automatically created for the interface. Select whether this is the Primary interface for the host. The DNS name from the primary interface is used as the host portion of the FQDN. Select whether this is the Provision interface for the host. TFTP boot takes place using the provisioning interface. For image-based provisioning, the script to complete the provisioning is executed through the provisioning interface. Select whether to use the interface for Remote execution . Leave the Virtual NIC checkbox clear. Click OK to save the interface configuration. Click Submit to apply the changes to the host. 5.2. Adding a virtual interface Use this procedure to configure a virtual interface for a host. This can be either a VLAN or an alias interface. An alias interface is an additional IP address attached to an existing interface. An alias interface automatically inherits a MAC address from the interface it is attached to; therefore, you can create an alias without specifying a MAC address. The interface must be specified in a subnet with boot mode set to static . Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Keep the Interface option selected in the Type list. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a physical interface" . Specify a MAC address for managed virtual interfaces so that the configuration files for provisioning are generated correctly. However, a MAC address is not required for virtual interfaces that are not managed. If creating a VLAN, specify ID in the form of eth1.10 in the Device Identifier field. If creating an alias, use ID in the form of eth1:10 . Select the Virtual NIC checkbox. Additional configuration options specific to virtual interfaces are appended to the form: Tag : Optionally set a VLAN tag to trunk a network segment from the physical network through to the virtual interface. If you do not specify a tag, managed interfaces inherit the VLAN tag of the associated subnet. User-specified entries from this field are not applied to alias interfaces. Attached to : Specify the identifier of the physical interface to which the virtual interface belongs, for example eth1 . This setting is required. Click OK to save the interface configuration. Click Submit to apply the changes to the host. 5.3. Adding a bonded interface Use this procedure to configure a bonded interface for a host. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Select Bond from the Type list. Additional type-specific configuration options are appended to the form. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a physical interface" . 
Bonded interfaces use IDs in the form of bond0 in the Device Identifier field. A single MAC address is sufficient. If you are adding a secondary interface, select Managed . Otherwise, Satellite does not apply the configuration. Specify the configuration options specific to bonded interfaces: Mode : Select the bonding mode that defines a policy for fault tolerance and load balancing. See Section 5.4, "Bonding modes available in Satellite" for a brief description of each bonding mode. Attached devices : Specify a comma-separated list of identifiers of attached devices. These can be physical interfaces or VLANs. Bond options : Specify a space-separated list of configuration options, for example miimon=100 . For more information on configuration options for bonded interfaces, see Configuring network bonding in Red Hat Enterprise Linux 8 Configuring and Managing Networking . Click OK to save the interface configuration. Click Submit to apply the changes to the host. CLI procedure To create a host with a bonded interface, enter the following command: Ensure that you replace bondN with bond and the ID of your device identifier, for example, bond0 . 5.4. Bonding modes available in Satellite Bonding Mode Description balance-rr Transmissions are received and sent sequentially on each bonded interface. active-backup Transmissions are received and sent through the first available bonded interface. Another bonded interface is only used if the active bonded interface fails. balance-xor Transmissions are based on the selected hash policy. In this mode, traffic destined for specific peers is always sent over the same interface. broadcast All transmissions are sent on all bonded interfaces. 802.3ad Creates aggregation groups that share the same settings. Transmits and receives on all interfaces in the active group. balance-tlb The outgoing traffic is distributed according to the current load on each bonded interface. balance-alb Receive load balancing is achieved through Address Resolution Protocol (ARP) negotiation. 5.5. Adding a baseboard management controller (BMC) interface You can control the power status of bare-metal hosts from Satellite. Use this procedure to configure a baseboard management controller (BMC) interface for a host that supports this feature. Prerequisites You know the MAC address, IP address, and other details of the BMC interface on the host, and authentication credentials for that interface. Note You only need the MAC address for the BMC interface if the BMC interface is managed, so that it can create a DHCP reservation. Procedure Enable BMC power management on your Capsule: In the Satellite web UI, navigate to Infrastructure > Subnets . Select the subnet of your host. On the Capsules tab, select your Capsule as BMC Capsule . Click Submit . Navigate to Hosts > All Hosts . Click Edit to the host you want to edit. On the Interfaces tab, click Add Interface . Select BMC from the Type list. Type-specific configuration options are appended to the form. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a physical interface" . Specify the configuration options specific to BMC interfaces: Username and Password : Specify any authentication credentials required by BMC. Provider : Specify the BMC provider. Click OK to save the interface configuration. Click Submit to apply the changes to the host.
[ "hammer host create --ask-root-password yes --hostgroup My_Host_Group --ip= My_IP_Address --mac= My_MAC_Address --managed true --interface=\"identifier= My_NIC_1, mac=_My_MAC_Address_1 , managed=true, type=Nic::Managed, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --interface=\"identifier= My_NIC_2 , mac= My_MAC_Address_2 , managed=true, type=Nic::Managed, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --interface=\"identifier= bondN , ip= My_IP_Address_2 , type=Nic::Bond, mode=active-backup, attached_devices=[ My_NIC_1 , My_NIC_2 ], managed=true, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \" --subnet-id= My_Subnet_ID", "satellite-installer --foreman-proxy-bmc-default-provider=ipmitool --foreman-proxy-bmc=true" ]
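To illustrate the earlier note about configuration files generated for managed interfaces, the following sketch suggests roughly what /etc/sysconfig/network-scripts/ifcfg-bond0 might contain for a managed bonded interface; it is an assumption for illustration, not output captured from Satellite, and the address values are placeholders:

DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
IPADDR=192.168.0.50
NETMASK=255.255.255.0
ONBOOT=yes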
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/adding_network_interfaces_managing-hosts
Chapter 18. Configuring custom SSL/TLS certificates
Chapter 18. Configuring custom SSL/TLS certificates You can configure the undercloud to use SSL/TLS for communication over public endpoints. However, if you want to use an SSL certificate with your own certificate authority, you must complete the following configuration steps. 18.1. Initializing the signing host The signing host is the host that generates and signs new certificates with a certificate authority. If you have never created SSL certificates on the chosen signing host, you might need to initialize the host so that it can sign new certificates. Procedure The /etc/pki/CA/index.txt file contains records of all signed certificates. Check if this file exists. If it does not exist, create an empty file: The /etc/pki/CA/serial file identifies the serial number to use for the next certificate to sign. Check if this file exists. If the file does not exist, create a new file with a new starting value: 18.2. Creating a certificate authority Normally you sign your SSL/TLS certificates with an external certificate authority. In some situations, you might want to use your own certificate authority. For example, you might want to have an internal-only certificate authority. Procedure Generate a key and certificate pair to act as the certificate authority: The openssl req command requests certain details about your authority. Enter these details at the prompt. These commands create a certificate authority file called ca.crt.pem . 18.3. Adding the certificate authority to clients For any external clients aiming to communicate using SSL/TLS, copy the certificate authority file to each client that requires access to your Red Hat OpenStack Platform environment. Procedure Copy the certificate authority to the client system: After you copy the certificate authority file to each client, run the following command on each client to add the certificate to the certificate authority trust bundle: 18.4. Creating an SSL/TLS key Enabling SSL/TLS on an OpenStack environment requires an SSL/TLS key to generate your certificates. Procedure Run the following command to generate the SSL/TLS key ( server.key.pem ): 18.5. Creating an SSL/TLS certificate signing request Complete the following steps to create a certificate signing request. Procedure Copy the default OpenSSL configuration file: Edit the new openssl.cnf file and configure the SSL parameters that you want to use for director. Examples of the types of parameters to modify include: Set the commonName_default to one of the following entries: If you are using an IP address to access director over SSL/TLS, use the undercloud_public_host parameter in the undercloud.conf file. If you are using a fully qualified domain name to access director over SSL/TLS, use the domain name. Add subjectAltName = @alt_names to the v3_req section. Edit the alt_names section to include the following entries: IP - A list of IP addresses that clients use to access director over SSL. DNS - A list of domain names that clients use to access director over SSL. Also include the Public API IP address as a DNS entry at the end of the alt_names section. Note For more information about openssl.cnf , run the man openssl.cnf command. Run the following command to generate a certificate signing request ( server.csr.pem ): Ensure that you include your OpenStack SSL/TLS key with the -key option. This command generates a server.csr.pem file, which is the certificate signing request. Use this file to create your OpenStack SSL/TLS certificate. 18.6.
Creating the SSL/TLS certificate To generate the SSL/TLS certificate for your OpenStack environment, the following files must be present: openssl.cnf The customized configuration file that specifies the v3 extensions. server.csr.pem The certificate signing request to generate and sign the certificate with a certificate authority. ca.crt.pem The certificate authority, which signs the certificate. ca.key.pem The certificate authority private key. Procedure Run the following command to create a certificate for your undercloud or overcloud: This command uses the following options: -config Use a custom configuration file, which is the openssl.cnf file with v3 extensions. -extensions v3_req Enables v3 extensions. -days Defines how long in days until the certificate expires. -in The certificate signing request. -out The resulting signed certificate. -cert The certificate authority file. -keyfile The certificate authority private key. This command creates a new certificate named server.crt.pem . Use this certificate in conjunction with your OpenStack SSL/TLS key. 18.7. Adding the certificate to the undercloud Complete the following steps to add your OpenStack SSL/TLS certificate to the undercloud trust bundle. Procedure Run the following command to combine the certificate and key: This command creates an undercloud.pem file. Copy the undercloud.pem file to a location within your /etc/pki directory and set the necessary SELinux context so that HAProxy can read it: Add the undercloud.pem file location to the undercloud_service_certificate option in the undercloud.conf file: Add the certificate authority that signed the certificate to the list of trusted Certificate Authorities on the undercloud so that different services within the undercloud have access to the certificate authority:
[ "sudo touch /etc/pki/CA/index.txt", "echo '1000' | sudo tee /etc/pki/CA/serial", "openssl genrsa -out ca.key.pem 4096 openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem", "sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "openssl genrsa -out server.key.pem 2048", "cp /etc/pki/tls/openssl.cnf .", "[req] distinguished_name = req_distinguished_name req_extensions = v3_req [req_distinguished_name] countryName = Country Name (2 letter code) countryName_default = AU stateOrProvinceName = State or Province Name (full name) stateOrProvinceName_default = Queensland localityName = Locality Name (eg, city) localityName_default = Brisbane organizationalUnitName = Organizational Unit Name (eg, section) organizationalUnitName_default = Red Hat commonName = Common Name commonName_default = 192.168.0.1 commonName_max = 64 Extensions to add to a certificate request basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] IP.1 = 192.168.0.1 DNS.1 = instack.localdomain DNS.2 = vip.localdomain DNS.3 = 192.168.0.1", "openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem", "sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem", "cat server.crt.pem server.key.pem > undercloud.pem", "sudo mkdir /etc/pki/undercloud-certs sudo cp ~/undercloud.pem /etc/pki/undercloud-certs/. sudo semanage fcontext -a -t etc_t \"/etc/pki/undercloud-certs(/.*)?\" sudo restorecon -R /etc/pki/undercloud-certs", "undercloud_service_certificate = /etc/pki/undercloud-certs/undercloud.pem", "sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/ sudo update-ca-trust extract" ]
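As an optional, illustrative verification step that is not part of the original procedure, the signed certificate can be checked against the certificate authority and its Subject Alternative Name entries reviewed with standard OpenSSL commands:

openssl verify -CAfile ca.crt.pem server.crt.pem
openssl x509 -in server.crt.pem -noout -text | grep -A1 'Subject Alternative Name'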
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/configuring-custom-ssl-tls-certificates
Chapter 92. TgzArtifact schema reference
Chapter 92. TgzArtifact schema reference Used in: Plugin Property Property type Description url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. type string Must be tgz .
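A minimal sketch of how a tgz artifact might be referenced from a KafkaConnect build configuration is shown below; the registry, plugin name, URL, and checksum are placeholders, and the snippet is illustrative rather than taken from this reference:

spec:
  build:
    output:
      type: docker
      image: my-registry.example.com/my-connect-build:latest
    plugins:
      - name: my-connector
        artifacts:
          - type: tgz
            url: https://example.com/artifacts/my-connector.tar.gz
            sha512sum: <sha512_checksum_of_the_artifact>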
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-tgzartifact-reference
Logging
Logging OpenShift Container Platform 4.10 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "tls.verify_certificate = false tls.verify_hostname = false", "oc get clusterversion/version -o jsonpath='{.spec.clusterID}{\"\\n\"}'", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" annotations: logging.openshift.io/preview-vector-collector: enabled spec: collection: logs: type: \"vector\" vector: {}", "oc delete pod -l component=collector", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging", "apply -f logging-loki.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector", "apply -f cr-lokistack.yaml", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV", "currentCSV: jaeger-operator.v1.8.2", "oc delete subscription jaeger -n openshift-operators", "subscription.operators.coreos.com \"jaeger\" deleted", "oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators", "clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage_class_name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}", "oc get deployment", "cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 0/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 0/1 1
0 6m49s elasticsearch-cdm-x6kdekli-3 0/1 1 0 6m44s", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f eo-namespace.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f <file-name>.yaml", "oc create -f olo-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}", "oc create -f <file-name>.yaml", "oc create -f eo-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: \"elasticsearch-operator\" namespace: \"openshift-operators-redhat\" 1 spec: channel: \"stable-5.1\" 2 installPlanApproval: \"Automatic\" 3 source: \"redhat-operators\" 4 sourceNamespace: \"openshift-marketplace\" name: \"elasticsearch-operator\"", "oc create -f <file-name>.yaml", "oc create -f eo-sub.yaml", "oc get csv --all-namespaces", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-node-lease elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-public elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-system elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2", "oc create -f <file-name>.yaml", "oc create -f olo-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: \"stable\" 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f olo-sub.yaml", "oc get csv -n openshift-logging", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE openshift-logging clusterlogging.5.1.0-202007012112.p0 OpenShift Logging 5.1.0-202007012112.p0 Succeeded", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage-class-name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 
kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}", "oc get deployment", "cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 1/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 1/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 1/1 1 0 6m44s", "oc create -f <file-name>.yaml", "oc create -f olo-instance.yaml", "oc get pods -n openshift-logging", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s", "oc auth can-i get pods/log -n <project>", "yes", "oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging", "oc label namespace openshift-operators-redhat project=openshift-operators-redhat", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ingress-operators-redhat spec: ingress: - from: - podSelector: {} - from: - namespaceSelector: matchLabels: project: \"openshift-operators-redhat\" - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi", "oc get pods --selector component=collector -o wide -n openshift-logging", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9", "oc get pods 
-l component=collector -n openshift-logging", "oc extract configmap/fluentd --confirm", "<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>", "outputRefs: - default", "oc edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}", "oc get pods -l component=collector -n openshift-logging", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3", "apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s", "oc edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi", "resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"", "oc edit clusterlogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . 
spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "oc project openshift-logging", "oc get pods -l component=elasticsearch-", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":", "oc rollout resume deployment/<deployment-name>", "oc rollout resume deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 resumed", "oc get pods -l component=elasticsearch-", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h", "oc rollout pause deployment/<deployment-name>", "oc rollout pause deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 paused", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'", "oc get service elasticsearch -o 
jsonpath={.spec.clusterIP} -n openshift-logging", "172.30.183.229", "oc get service elasticsearch -n openshift-logging", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h", "oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108", "oc project openshift-logging", "oc extract secret/elasticsearch --to=. --keys=admin-ca", "admin-ca", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1", "cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml", "oc create -f <file-name>.yaml", "route.route.openshift.io/elasticsearch created", "token=USD(oc whoami -t)", "routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`", "curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"", "{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi", "oc edit ClusterLogging instance", "oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . 
spec: visualization: type: \"kibana\" kibana: replicas: 1 1", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi", "tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"", "oc adm taint nodes <node-name> <key>=<value>:<effect>", "oc adm taint nodes node1 elasticsearch=node:NoExecute", "logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4", "oc adm taint nodes <node-name> <key>=<value>:<effect>", "oc adm taint nodes node1 kibana=node:NoExecute", "visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4", "tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"", "oc adm taint nodes <node-name> <key>=<value>:<effect>", "oc adm taint nodes node1 collector=node:NoExecute", "collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana", "oc 
get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "variant: openshift version: 4.10.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10", "butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml", "oc apply -f 40-worker-custom-journald.yaml", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to 
rendered-worker-913514517bcea7c93bd446f4830bc64e", "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 type: s3 storageClassName: gp2 tenants: mode: openshift-logging", "oc apply -f logging-loki.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector", "oc apply -f cr-lokistack.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. 
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "oc auth can-i get pods/log -n <project>", "yes", "oc auth can-i get pods/log -n <project>", "yes", "{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: 
audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default parse: json 8 labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"", "oc create secret generic -n openshift-logging <my-secret> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: enableStructuredContainerLogs: true 1 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json", "apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-insecure 3 type: \"elasticsearch\" 4 url: http://elasticsearch.insecure.com:9200 5 - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-secure 10 - default 11 parse: json 12 labels: myLabel: \"myValue\" 13 - name: infrastructure-audit-logs 14 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: logs: \"audit-infra\"", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>", "oc create secret -n openshift-logging openshift-test-secret.yaml", "kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret", "oc create -f <file-name>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 parse: json 11 labels: clusterId: \"C1234\" 12 - name: forward-to-fluentd-insecure 13 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"", "oc create -f <file-name>.yaml", "input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: 
rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'udp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 parse: json 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"", "oc create -f <file-name>.yaml", "spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout", "<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}", "<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}", "apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=", "oc apply -f cw-secret.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11", "oc create -f <file-name>.yaml", "oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"", "oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message", "oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName 
\"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"", "aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },", "cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"", "cloudwatch: groupBy: namespaceName region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "cloudwatch: groupBy: namespaceUUID region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - 
logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1", "apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: <your_role_name> 8 pipelines: - name: to-cloudwatch 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11", "create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions", "apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: \"loki\" 4 url: http://loki.insecure.com:3100 5 loki: tenantKey: kubernetes.namespace_name labelKeys: kubernetes.labels.foo - name: loki-secure 6 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 7 loki: tenantKey: kubernetes.namespace_name 8 labelKeys: kubernetes.labels.foo 9 pipelines: - name: application-logs 10 inputRefs: 11 - application - audit outputRefs: 12 - loki-secure", "oc create -f <file-name>.yaml", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. 
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2", "oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 1 logId : \"app-gcp\" 2 pipelines: - name: test-app inputRefs: 3 - application outputRefs: - gcp-1", "oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: outputs: - name: splunk-receiver 3 secret: name: vector-splunk-secret 4 type: splunk 5 url: <http://your.splunk.hec.url:8088> 6 pipelines: 7 - inputRefs: - application - infrastructure name: 8 outputRefs: - splunk-receiver 9", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: forward-to-fluentd-insecure 8 inputRefs: 9 - my-app-logs outputRefs: 10 - fluentd-server-insecure parse: json 11 labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"", "oc create -f <file-name>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 parse: json 5 inputs: 6 - name: myAppLogData application: selector: matchLabels: 7 environment: production app: nginx namespaces: 8 - app1 - app2 outputs: 9 - default", "- inputRefs: [ myAppLogData, myOtherAppLogData ]", "oc create -f <file-name>.yaml", "oc delete pod --selector logging-infra=collector", "{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}", "{\"message\":\"{\\\"level\\\":\\\"info\\\",\\\"name\\\":\\\"fred\\\",\\\"home\\\":\\\"bedrock\\\"\", \"more fields...\"}", "pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json", "{\"structured\": { \"level\": \"info\", 
\"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}", "outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: <application> outputRefs: default parse: json 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }", "{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json", "oc create -f <file-name>.yaml", "oc delete pod --selector logging-infra=collector", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9", "oc process -f <templatefile> | oc apply -n openshift-logging -f -", "oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -", "serviceaccount/eventrouter created clusterrole.authorization.openshift.io/event-reader created clusterrolebinding.authorization.openshift.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created", "oc get pods --selector component=eventrouter -o name -n openshift-logging", 
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r", "oc logs <cluster_logging_eventrouter_pod> -n openshift-logging", "oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging", "{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}", "oc get pod -n openshift-logging --selector component=elasticsearch", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m", "oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }", "oc project openshift-logging", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s", "oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices", "Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 
0 0 0 0", "oc get ds collector -o json | grep collector", "\"containerName\": \"collector\"", "oc get kibana kibana -o json", "[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]", "oc project openshift-logging", "oc get clusterlogging instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging . status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1", "nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}", "nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}", "Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. 
Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:", "Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable", "Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:", "oc project openshift-logging", "oc describe deployment cluster-logging-operator", "Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----", "oc get replicaset", "NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m", "oc describe replicaset cluster-logging-operator-574b8987df", "Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----", "oc project openshift-logging", "oc get Elasticsearch", "NAME AGE elasticsearch 5h9m", "oc get Elasticsearch <Elasticsearch-instance> -o yaml", "oc get Elasticsearch elasticsearch -n openshift-logging -o yaml", "status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all", "status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. 
reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}", "status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}", "status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable", "status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable", "status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy", "status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters", "status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored", "oc get pods --selector component=elasticsearch -o name", "pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7", "oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices", "Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0", "oc get pods --selector component=elasticsearch -o name", "pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7", "oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw", ". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . 
Events: <none>", "oc get deployment --selector component=elasticsearch -o name", "deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3", "oc describe deployment elasticsearch-cdm-1gon-1", ". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>", "oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d", "oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495", ". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>", "eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0", "eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3", "Total number of Namespaces. es_index_namespaces_total 5", "es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5", "message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v", "-n openshift-logging get pods -l component=elasticsearch", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v", "logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging", "logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty", "exec -n openshift-logging -c elasticsearch 
<elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty", "-n openshift-logging get po -o wide", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE", "-n openshift-logging get po -o wide", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o 
jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/logging/index
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 9.4. 4.1. Installer and image creation Support to add customized files for SCAP security profile to a blueprint With this enhancement, you can now add customized tailoring options for a profile to the osbuild-composer blueprint customizations by using the following options: selected for the list of rules that you want to add unselected for the list of rules that you want to remove With the default org.ssgproject.content rule namespace, you can omit the prefix for rules under this namespace. For example: the org.ssgproject.content_grub2_password and grub2_password are functionally equivalent. When you build an image from that blueprint, it creates a tailoring file with a new tailoring profile ID and saves it to the image as /usr/share/xml/osbuild-oscap-tailoring/tailoring.xml . The new profile ID will have _osbuild_tailoring appended as a suffix to the base profile. For example, if you use the cis base profile, xccdf_org.ssgproject.content_profile_cis_osbuild_tailoring . Jira:RHELDOCS-17792 [1] Minimal RHEL installation now installs only the s390utils-core package In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. As a result, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. If you want to use the s390utils-base package with a minimal RHEL installation, you must manually install the package after completing the RHEL installation or explicitly install s390utils-base using a Kickstart file. Bugzilla:1932480 [1] 4.2. Security Keylime verifier and registrar containers available You can now configure Keylime server components, the verifier and registrar, as containers. When configured to run inside a container, the Keylime registrar monitors the tenant systems from the container without any binaries on the host. The container deployment provides better isolation, modularity, and reproducibility of Keylime components. Jira:RHELDOCS-16721 [1] libkcapi now provides an option for specifying target file names in hash-sum calculations This update of the libkcapi (Linux kernel cryptographic API) packages introduces the new option -T for specifying target file names in hash-sum calculations. The value of this option overrides file names specified in processed HMAC files. You can use this option only with the -c option, for example: Jira:RHEL-15298 [1] Finer control over MACs in SSH with crypto-policies You can now set additional options for message authentication codes (MACs) for the SSH protocol in the system-wide cryptographic policies ( crypto-policies ). With this update, the crypto-policies option ssh_etm has been converted into a tri-state etm@SSH option. The ssh_etm option has been deprecated. You can now set ssh_etm to one of the following values: ANY Allows both encrypt-then-mac and encrypt-and-mac MACs. DISABLE_ETM Disallows encrypt-then-mac MACs. DISABLE_NON_ETM Disallows MACs that do not use encrypt-then-mac . Note that ciphers that use implicit MACs are always allowed because they use authenticated encryption. Jira:RHEL-15925 The semanage fcontext command no longer reorders local modifications The semanage fcontext -l -C command lists local file context modifications stored in the file_contexts.local file. 
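As a quick illustration of the commands involved (the path and context type below are arbitrary examples, not values taken from this release note), a local modification can be added and then listed in the preserved order:
# semanage fcontext -a -t httpd_sys_content_t '/srv/web(/.*)?'
# semanage fcontext -l -C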
The restorecon utility processes the entries in the file_contexts.local from the most recent entry to the oldest. Previously, semanage fcontext -l -C listed the entries in an incorrect order. This mismatch between processing order and listing order caused problems when managing SELinux rules. With this update, semanage fcontext -l -C displays the rules in the correct and expected order, from the oldest to the newest. Jira:RHEL-24462 [1] Additional services confined in the SELinux policy This update adds additional rules to the SELinux policy that confine the following systemd services: nvme-stas rust-afterburn rust-coreos-installer bootc As a result, these services do not run with the unconfined_service_t SELinux label anymore, and run successfully in SELinux enforcing mode. Jira:RHEL-12591 [1] New SELinux policy module for the SAP HANA service This update adds additional rules to the SELinux policy for the SAP HANA service. As a result, the service now runs successfully in SELinux enforcing mode in the sap_unconfined_t domain. Jira:RHEL-21452 The glusterd SELinux module moved to a separate glusterfs-selinux package With this update, the glusterd SELinux module is maintained in the separate glusterfs-selinux package. The module is therefore no longer part of the selinux-policy package. For any actions that concern the glusterd module, install and use the glusterfs-selinux package. Jira:RHEL-1548 The fips.so library for OpenSSL provided as a separate package OpenSSL uses the fips.so shared library as a FIPS provider. With this update, the latest version of fips.so submitted to the National Institute of Standards and Technology (NIST) for certification is in a separate package to ensure that future versions of OpenSSL use certified code or code undergoing certification. Jira:RHEL-23474 [1] The chronyd-restricted service is confined by the SELinux policy This update adds additional rules to the SELinux policy that confine the new chronyd-restricted service. As a result, the service now runs successfully in SELinux. Jira:RHEL-18219 OpenSSL adds a drop-in directory for provider configuration The OpenSSL TLS toolkit supports provider APIs for installation and configuration of modules that provide cryptographic algorithms. With this update, you can place provider-specific configuration in separate .conf files in the /etc/pki/tls/openssl.d directory without modifying the main OpenSSL configuration file. Jira:RHEL-17193 SELinux user-space components rebased to 3.6 The SELinux user-space components libsepol , libselinux , libsemanage , policycoreutils , checkpolicy , and mcstrans library package have been rebased to 3.6. This version provides various bug fixes, optimizations and enhancements, most notably: Added support for deny rules in CIL. Added support for notself and other keywords in CIL. Added the getpolicyload binary that prints the number of policy reloads performed on the current system. Jira:RHEL-16233 GnuTLS rebased to 3.8.3 The GnuTLS package has been rebased to upstream version 3.8.3 This version provides various bug fixes and enhancements, most notably: The gnutls_hkdf_expand function now accepts only arguments with lengths less than or equal to 255 times hash digest size, to comply with RFC 5869 2.3. Length limit for TLS PSK usernames has been increased to 65535 characters. The gnutls_session_channel_binding API function performs additional checks when GNUTLS_CB_TLS_EXPORTER is requested accordingly to RFC 9622 4.2. 
The GNUTLS_NO_STATUS_REQUEST flag and the %NO_STATUS_REQUEST priority modifier have been added to allow disabling of the status_request TLS extension on the client side. GnuTLS now checks the contents of the Change Cipher Spec message to be equal to 1 when the TLS version is older than 1.3. ClientHello extensions order is randomized by default. GnuTLS now supports EdDSA key generation on PKCS #11 tokens, which previously did not work. Jira:RHEL-14891 [1] nettle rebased to 3.9.1 The nettle library package has been rebased to 3.9.1. This version provides various bug fixes, optimizations and enhancements, most notably: Added balloon password hashing Added SIV-GCM authenticated encryption mode Added Offset Codebook Mode authenticated encryption mode Improved performance of the SHA-256 hash function on 64-bit IBM Z, AMD and Intel 64-bit architectures Improved performance of the Poly1305 hash function on IBM Power Systems, Little Endian, AMD and Intel 64-bit architectures Jira:RHEL-14890 [1] p11-kit rebased to 0.25.3 The p11-kit packages have been updated to upstream version 0.25.3. The packages contain the p11-kit tool for managing PKCS #11 modules, the trust tool for operating on the trust policy store, and the p11-kit library. Notable enhancements include the following: Added support for PKCS #11 version 3.0 The pkcs11.h header file: Added ChaCha20/Salsa20, Poly1305 and IBM-specific mechanisms and attributes Added AES-GCM mechanism parameters for message-based encryption The p11-kit tool: Added utility commands to list and manage objects of a token ( list-tokens , list-mechanisms , list-objects , import-object , export-object , delete-object , and generate-keypair ) Added utility commands to manage PKCS#11 profiles of a token ( list-profiles , add-profile , and delete-profile ) Added the print-config command for printing merged configuration The trust tool: Added the check-format command to validate the format of .p11-kit files Jira:RHEL-14834 [1] libkcapi rebased to 1.4.0 The libkcapi library, which provides access to the Linux kernel crypto API, has been rebased to upstream version 1.4.0. The update includes various enhancements and bug fixes, most notably: Added the sm3sum and sm3hmac tools. Added the kcapi_md_sm3 and kcapi_md_hmac_sm3 APIs. Added SM4 convenience functions. Fixed support for link-time optimization (LTO). Fixed LTO regression testing. Fixed support for AEAD encryption of an arbitrary size with kcapi-enc . Jira:RHEL-5367 [1] User and group creation in OpenSSH uses the sysusers.d format Previously, OpenSSH used static useradd scripts. With this update, OpenSSH uses the sysusers.d format to declare system users, which makes it possible to introspect system users. Jira:RHEL-5222 OpenSSH limits artificial delays in authentication OpenSSH's response after login failure is artificially delayed to prevent user enumeration attacks. This update introduces an upper limit on such delays when remote authentication takes too long, for example in privilege access management (PAM) processing. Jira:RHEL-2469 [1] stunnel rebased to 5.71 The stunnel TLS/SSL tunneling service has been rebased to upstream version 5.71. Notable new features include: Added support for modern PostgreSQL clients. You can use the protocolHeader service-level option to insert custom connect protocol negotiation headers. You can use the protocolHost option to control the client SMTP protocol negotiation HELO/EHLO value. Added client-side support for Client-side protocol = ldap . 
You can now configure session resumption by using the service-level sessionResume option. Added support to request client certificates in server mode with CApath (previously, only CAfile was supported). Improved file reading and logging performance. Added support for configurable delay for the retry option. In client mode, OCSP stapling is requested and verified when verifyChain is set. In server mode, OCSP stapling is always available. Inconclusive OCSP verification breaks TLS negotiation. You can disable this by setting OCSPrequire = no . Jira:RHEL-2468 [1] New options for dropping capabilities in Rsyslog You can now configure Rsyslog's behavior when dropping capabilities by using the following global options: libcapng.default Determines Rsyslog's actions when it encounters errors while dropping capabilities. The default value is on , which causes Rsyslog to exit if a libcapng-related error occurs. libcapng.enable Determines whether Rsyslog drops capabilities during startup. If this option is disabled, libcapng.default has no impact. Jira:RHEL-943 [1] audit rebased to 3.1.2 The Linux Audit system has been updated to version 3.1.2, which provides bug fixes, enhancements, and performance improvements over the previously released version 3.0.7. Notable enhancements include: The auparse library now interprets unnamed and anonymous sockets. You can use the new keyword this-hour in the start and end options of the ausearch and aureport tools. Support for the io_uring asynchronous I/O API has been added. User-friendly keywords for signals have been added to the auditctl program. Handling of corrupt logs in auparse has been improved. The ProtectControlGroups option is now disabled by default in the auditd service. Rule checking for the exclude filter has been fixed. The interpretation of OPENAT2 fields has been enhanced. The audispd af_unix plugin has been moved to a standalone program. The Python binding has been changed to prevent setting Audit rules from the Python API. This change was made due to a bug in the Simplified Wrapper and Interface Generator (SWIG). Jira:RHEL-14896 [1] Rsyslog rebased to 8.2310 The Rsyslog log processing system has been rebased to upstream version 8.2310. This update introduces significant enhancements and bug fixes. Most notable enhancements include: Customizable TLS/SSL encryption settings In previous versions, configuring TLS/SSL encryption settings for separate connections was limited to global settings. With the latest version, you can now define unique TLS/SSL settings for each individual connection in Rsyslog. This includes specifying different CA certificates, private keys, public keys, and CRL files for enhanced security and flexibility. For detailed information and usage, see documentation provided in the rsyslog-doc package. Refined capability dropping feature You can now set additional options that relate to capability dropping. You can disable capability dropping by setting the libcapng.enable global option to off . For more information, see RHEL-943 . Jira:RHEL-937 , Jira:RHEL-943 SCAP Security Guide rebased to 0.1.72 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.72. This version provides bug fixes and various enhancements, most notably: CIS profiles are updated to align with the latest benchmarks. The PCI DSS profile is aligned with the PCI DSS policy version 4.0. STIG profiles are aligned with the latest DISA STIG policies. For additional information, see the SCAP Security Guide release notes .
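As a usage sketch for the rebased SCAP Security Guide content, a system can be evaluated against one of the updated profiles with the oscap utility; the data stream path below is the location usually shipped by the scap-security-guide package, and the CIS profile ID and report path are illustrative:
# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis --report /tmp/cis-report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml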
Jira:RHEL-21425 4.3. RHEL for Edge Support for building FIPS enabled RHEL for Edge images This enhancement adds support for building FIPS enabled RHEL for Edge images for the following images types: edge-installer edge-simplified-installer edge-raw-image edge-ami edge-vsphere Important You can enable FIPS mode only during the image provisioning process. You cannot change to FIPS mode after the non-FIPS image build starts. Jira:RHELDOCS-17263 [1] 4.4. Shells and command-line tools openCryptoki rebased to version 3.22.0 The opencryptoki package has been updated to version 3.22.0. Notable changes include: Added support for the AES-XTS key type by using the CPACF protected keys. Added support for managing certificate objects. Added support for public sessions with the no-login option. Added support for logging in as the Security Officer (SO). Added support for importing and exporting the Edwards and Montgomery keys. Added support for importing the RSA-PSS keys and certificates. For security reasons, the 2 key parts of an AES-XTS key should not be the same. This update adds checks to the key generation and import process to ensure this. Various bug fixes have been implemented. Jira:RHEL-11412 [1] 4.5. Infrastructure services synce4l rebased to version 1.0.0 The synce4l protocol has been updated to version 1.0.0. This update adds support for kernel Digital Phase Locked Loop (DPLL) interface. Jira:RHEL-10089 [1] chrony rebased to version 4.5 The chrony suite has been updated to version 4.5. Notable changes include: Added support for the AES-GCM-SIV cipher to shorten Network Time Security (NTS) cookies to improve reliability of NTS over the internet, where some providers block or limit the rate of longer Network Time Protocol (NTP) messages. Added periodic refresh of IP addresses of NTP sources specified by hostname. The default interval is two weeks and it can be disabled by adding refresh 0 parameter to the chrony.conf file. Improved automatic replacement of unreachable NTP sources. Improved logging of important changes made by the chronyc utility. Improved logging of source selection failures and falsetickers. Added the hwtstimeout directive to configure timeout for late hardware transmit timestamps. Added experimental support for corrections provided by Precision Time Protocol (PTP) transparent clocks to reach accuracy of PTP with hardware timestamping. Added the chronyd-restricted service as an alternative service for minimal client-only configurations where the chronyd service can be started without root privileges. Fixed the presend option in interleaved mode. Fixed reloading of modified sources specified by IP address from the sourcedir directories. Jira:RHEL-6522 linuxptp rebased to version 4.2 The linuxptp protocol has been updated to version 4.2. Notable changes include: Added support for multiple domains in the phc2sys utility. Added support for notifications on clock updates and changes in the Precision Time Protocol (PTP) parent dataset, for example, clock class. Added support for PTP Power Profile, namely IEEE C37.238-2011 and IEEE C37.238-2017. Jira:RHEL-2026 4.6. Networking The nft utility can now reset nftables rule-contained states With this enhancement, you can use the nft reset command to reset nftables rule-contained states. For example, use this feature to reset counter and quota statement values. Jira:RHEL-5980 [1] Marvell Octeon PCIe Endpoint Network Interface Controller driver is available This enhancement has added the octeon_ep driver. 
You can use it for networking of Marvell's Octeon PCIe Endpoint network interface cards. The host drivers act as PCI Express (PCIe) endpoint network interface (NIC) to support Marvell OCTEON TX2 CN106XX, a 24 N2 cores Infrastructure Processor Family. By using OCTEON TX2 driver as a PCIe NIC, you can use OCTEON TX2 as a PCIe endpoint in various products: security firewalls, 5G Open Radio Access Network (ORAN) and Virtual RAN (VRAN) applications and data processing offloading applications. Currently, you can use it with the following devices: Network controller: Cavium, Inc. Device b100 Network controller: Cavium, Inc. Device b200 Network controller: Cavium, Inc. Device b400 Network controller: Cavium, Inc. Device b900 Network controller: Cavium, Inc. Device ba00 Network controller: Cavium, Inc. Device bc00 Network controller: Cavium, Inc. Device bd00 Jira:RHEL-9308 [1] NetworkManager now supports configuring the switchdev mode for advanced hardware offload With this enhancement, you can configure the following new properties in NetworkManager connection profiles: sriov.eswitch-mode sriov.eswitch-inline-mode sriov.eswitch-encap-mode With these properties, you can configure the eSwitch of smart network interface controllers (Smart NICs). For example, use the sriov.eswitch-mode setting to change the mode from legacy SR-IOV to switchdev to use advanced hardware offload features. Jira:RHEL-1441 NetworkManager supports changing ethtool channel settings A network interface can have multiple interrupt request (IRQs) and associated packet queues called channels . With this enhancement, NetworkManager connection profiles can specify the number of channels to assign to an interface through connection properties ethtool.channels-rx , ethtool.channels-tx , ethtool.channels-other , and ethtool.channels-combined . Jira:RHEL-1471 [1] Nmstate can now create a YAML file to revert settings With this enhancement, Nmstate can create a "revert configuration file" that contains the differences between the current network settings and a YAML file with the new configuration that you want to apply. If the settings do not work as expected after you applied the YAML file, you can use the revert configuration file to restore the settings: Create a YAML file, for example, new.yml with the configuration that you want to apply. Create a revert configuration file that contains the differences between intended settings in new.yml and the current state: Apply the configuration from new.yml . If you want now to switch back to the state, apply revert.yml . Alternatively, you can use the NetworkState::generate_revert(current) call if you use the Nmstate API to create a revert configuration. Jira:RHEL-1434 Nmstate API configures VPN connection based on IPsec configuration The Libreswan utility is an implementation of IPsec for configuring VPNs. With this update, by using nmstatectl , you can configure IPsec-based authentication types along with configuration modes (tunnel and transport) and network layouts ( host-to-subnet , host-to-host , subnet-to-subnet ). Jira:RHEL-1605 nmstate now supports the priority bond property With this update, you can set the priority of bond ports in the nmstate framework by using the priority property in the ports-config section of the configuration file. An example YAML file can look as follows: When an active port within the bonded interface is down, the RHEL kernel elects the active port that has the highest numerical value in the priority property from the pool of all backup ports. 
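A minimal sketch of such a YAML file, assuming a hypothetical bond named bond99 in active-backup mode with ports eth2 and eth3 (only the ports-config section and the priority property come from this release note; the surrounding layout follows the usual nmstate bond schema):
interfaces:
- name: bond99
  type: bond
  state: up
  link-aggregation:
    mode: active-backup
    ports-config:
    - name: eth2
      priority: 15
    - name: eth3
      priority: 10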
The priority property is relevant for the following modes of the bond interface: active-backup balance-tlb balance-alb Jira:RHEL-1438 [1] NetworkManager wifi connections support a new MAC address-based privacy option With this enhancement, you can configure NetworkManager to associate a random-generated MAC address with the Service Set Identifier (SSID) of a wifi network. This enables you to permanently use a random but consistent MAC address for a wifi network even if you delete a connection profile and re-create it. To use this new feature, set the 802-11-wireless.cloned-mac-address property of a wifi connection profile to stable-ssid . Jira:RHEL-16470 Introduction of new nmstate attributes for the VLAN interface With this update of the nmstate framework, the following VLAN attributes were introduced: registration-protocol : VLAN Registration Protocol. The valid values are gvrp (GARP VLAN Registration Protocol), mvrp (Multiple VLAN Registration Protocol), and none . reorder-headers : reordering of output packet headers. The valid values are true and false . loose-binding : loose binding of the interface to the operating state of its primary device. The valid values are true and false . Your YAML configuration file can look similar to the following example: Jira:RHEL-19142 ipv4.dhcp-client-id set to none prevents sending a client-identifier If the client-identifier option is not set in NetworkManager, then the actual value depends on the type of DHCP clients in use, such as NetworkManager internal DHCP client or dhclient . Generally, DHCP clients send a client-identifier . Therefore, in almost all cases, you do not need to set the none option. As a result, this option is only useful in case of some unusual DHCP server configurations that require clients to not send a client-identifier . Jira:RHEL-1469 nmstate now supports creating MACsec interfaces With this update, the users of the nmstate framework can configure MACsec interfaces to protect their communication on Layer 2 of the Open Systems Interconnection (OSI) model. As a result, there is no need to encrypt individual services later on Layer 7. Also, the feature eliminates associated challenges such as managing large amounts of certificates for each endpoint. For more information, see Configuring a MACsec connection using nmstatectl . Jira:RHEL-1420 netfilter update The kernel package has been upgraded to version 5.14.0-405 in RHEL 9. As a result, the rebase also provided multiple enhancements and bug fixes in the netfilter component of the RHEL kernel. The most notable change includes: The nftables subsystem is able to match various inner header fields of the tunnel packets. This enables more granular and effective control over network traffic, especially in environments where tunneling protocols are used. Jira:RHEL-16630 [1] firewalld now avoids unnecessary firewall rule flushes The firewalld service does not remove all existing rules from the iptables configuration if both following conditions are met: firewalld is using the nftables backend. There are no firewall rules created with the --direct option. This change aims at reducing unnecessary operations (firewall rules flushes) and improves integration with other software. Jira:RHEL-427 [1] The ss utility adds visibility improvement to TCP bound-inactive sockets The iproute2 suite provides a collection of utilities to control TCP/IP networking traffic. TCP bound-inactive sockets are attached to an IP address and a port number but neither connected nor listening on TCP ports. 
The socket services ( ss ) utility adds support for the kernel to dump TCP bound-inactive sockets. You can view those sockets with the following command options: ss --all : to dump all sockets including TCP bound-inactive ones ss --bound-inactive : to dump only bound-inactive sockets Jira:RHEL-21223 [1] The Nmstate API now supports SR-IOV VLAN 802.1ad tagging With this enhancement, you can now use the Nmstate API to enable hardware-accelerated Single-Root I/O Virtualization (SR-IOV) Virtual Local Area Network (VLAN) 802.1ad tagging on cards whose firmware supports this feature. Jira:RHEL-1425 The TCP Illinois congestion algorithm kernel module is re-enabled TCP Illinois is a variant of the TCP protocol. Customers such as Internet Service Providers (ISP) experience sub-optimal performance without TCP Illinois algorithm and network traffic does not scale well even when using Bandwidth and Round-trip propagation time (BBR) algorithm that results into high latency. As a result, TCP Illinois algorithm can produce slightly higher average throughput, fairer network resources allocation, and compatibility. Jira:RHEL-5736 [1] The iptables utility rebased to version 1.8.10 The iptables utility defines rules for packet filtering to manage firewall. This utility has been rebased. Notable changes include: Notable features: Add support for newer chunk types in sctp match Align ip6tables opt-in column if empty helps when piping output to jc --iptables Print numeric protocol numbers with --numeric for a more stable output More translations for *tables-translate utilities with improved output formatting Several manual page improvements Notable fixes: iptables-restore error messages incorrectly pointing at the COMMIT line Broken -p Length match in ebtables Broken ebtables among match when used in multiple rules restored through ebtables-restore Program could crash when renaming a chain depending on the number of chains already present Non-critical memory leaks Missing broute table support in ebtables after the switch to nft-variants Broken ip6tables rule counter setting with '-c' option Unexpected error message when listing a non-existent chain Potential false-positive ebtables rule comparison if among match is used Prohibit renaming a chain to an invalid name Stricter checking of "chain lines" in iptables-restore input to detect invalid chain names Non-functional built-in chain policy counters Jira:RHEL-14147 nftables rebased to version 1.0.9 The nftables utility has been upgraded to version 1.0.9, which provides multiple bug fixes and enhancements. Notable changes include: Improvements to the --optimize command option Extended the Python nftables class Improved behavior when dealing with rules created by iptables-nft Support accessing fields of vxlan-encapsulated headers Initial support for GRE, Geneve, and GRETAP protocols New reset rule(s) commands to reset rule counters, quotas New destroy command deletes things only if they exist New last statement recording when it has seen a packet for the last time Add and remove devices from netdev-family chains New meta broute expression to emulate ebtables' broute functionality Fixed miscellaneous memory leaks Fixed wrong location in error messages in corner-cases Set and map statements missing in JSON output Jira:RHEL-14191 firewalld rebased to version 1.3 The firewalld package has been upgraded to version 1.3, which provides multiple bug fixes and enhancements. 
Notable changes include: New reset-to-defaults CLI option: This option resets configuration of the firewalld service to defaults. This allows users to erase firewalld configuration and start over with the default settings. Enable the --add-masquerade CLI option for policies with ingress-zone=ZONE , where ZONE has interfaces assigned with the --add-interface CLI option. This removes a restriction and enables usage of interfaces (instead of sources) in common scenarios. The reasons to introduce these features: reset-to-defaults was implemented to reset the firewall to the default configuration. Using interfaces allows change of IP address without impacting firewall configuration. As a result, users can perform the following actions: Reset the configuration Combine --add-maquerade with --add-interface while using policies Jira:RHEL-14485 4.7. Kernel Kernel version in RHEL 9.4 Red Hat Enterprise Linux 9.4 is distributed with the kernel version 5.14.0-427.13.1. rteval now supports adding and removing arbitrary CPUs from the default measurement CPU list With the rteval utility, you can add (using the + sign) or subtract (using the - sign) CPUs to the default measurement CPU list when using the --measurement-cpulist parameter, instead of having to specify an entire new list. Additionally, --measurement-run-on-isolcpus is introduced for adding the set of all isolated CPUs to the default measurement CPU list. This option covers the most common use case of a real-time application running on isolated CPUs. Other use cases require a more generic feature. For example, some real-time applications used one isolated CPU for housekeeping, requiring it to be excluded from the default measurement CPU list. As a result, you can now not only add, but also remove arbitrary CPUs from the default measurement CPU list in a flexible way. Removing takes precedence over adding. This rule applies to both, CPUs specified with +/- signs and to those defined with --measurement-run-on-isolcpus . Jira:RHEL-9912 [1] rtla rebased to version 6.6 of the upstream kernel source code The rtla utility has been upgraded to the latest upstream version, which provides multiple bug fixes and enhancements. Notable changes include: Added the -C option to specify additional control groups for rtla threads to run in, apart from the main rtla thread. Added the --house-keeping option to place rtla threads on a housekeeping CPU and to put measurement threads on different CPUs. Added support to the timerlat tracer so that you can run timerlat hist and timerlat top threads in user space. Jira:RHEL-10079 [1] cyclicdeadline now supports generating a histogram of latencies With this release, the cyclicdeadline utility supports generating a histogram of latencies. You can use this feature to get more insight into the frequency of latency spikes of different sizes, rather than getting just one worst-case number. Jira:RHEL-9910 [1] SGX is now fully supported Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel provides the SGX version 1 and 2 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Version 2 adds Enclave Dynamic Memory Management (EDMM). Notable features include: Modifying EPCM permissions of regular enclave pages that belong to an initialized enclave. Dynamic addition of regular enclave pages to an initialized enclave. Expanding an initialized enclave to accommodate more threads. 
Removing regular and TCS pages from an initialized enclave. In this release, SGX moves from Technology Preview to a fully supported feature. Bugzilla:2041883 [1] The Intel data streaming accelerator driver is now fully supported The Intel data streaming accelerator driver (IDXD) is a kernel driver that provides an Intel CPU integrated accelerator. It includes a shared work queue with process address space ID ( pasid ) submission and shared virtual memory (SVM). In this release, IDXD moves from a Technology Preview to a fully supported feature. Jira:RHEL-10097 [1] The eBPF facility has been rebased to Linux kernel version 6.6 Notable changes and enhancements include: New dynamic pointers ( dynptrs ) of the skb and xdp type, which enable for more ergonomic and less brittle iteration through data and variable-sized accesses in BPF programs. A new BPF netfilter program type and minimal support to hook BPF programs to netfilter hooks, such as prerouting or forward. Multiple improvements to kernel pointers ( kptrs ): You can use kptrs in more map types. RCU semantics are enabled for task kptrs . New reference-counted local kptrs useful for adding a node to both the BPF list and rbtree . At load time, BPF programs can detect whether a particular kfunc exists or not. Several new kfuncs for working with dynptrs , cgroups , sockets , and cpumasks . New BPF links for attaching multiple uprobes and usdt probes, which is significantly faster and saves extra file descriptors (FDs). The BPF map element count is enabled for all program types. The memory usage reporting for all BPF map types is more precise. The bpf_fib_lookup BPF helper includes the routing table ID. The BPF_OBJ_PIN and BPF_OBJ_GET commands support O_PATH FDs. Jira:RHEL-10691 [1] The libbpf-tools package is now available on IBM Z The libbpf-tools package, which provides command line tools for the BPF Compiler Collection (BCC), is now available on the IBM Z architecture. As a result, you can now use commands from libbpf-tools on IBM Z. Jira:RHEL-16325 [1] 4.8. Boot loader DEP/NX support in the pre-boot stage The memory protection feature known as Data Execution Prevention (DEP), No Execute (NX), or Execute Disable (XD), blocks the execution of code that is marked as non-executable. DEP/NX has been available in RHEL at the operating system level. This release adds DEP/NX support in the GRUB and shim boot loaders. This can prevent certain vulnerabilities during the pre-boot stage, such as a malicious EFI driver that might start certain attacks without the DEP/NX protection. Jira:RHEL-10288 [1] 4.9. File systems and storage Setting a filesystem size limit is now supported With this update, users can now set a filesystem size limit when creating or modifying a filesystem. The stratisd service enables dynamic filesystem growth, but excessive expansion of an XFS filesystem can cause significant performance issues. The addition of this feature addresses potential performance issues that might occur when growing XFS filesystems beyond a certain threshold. By setting a filesystem size limit, users can prevent such issues and ensure optimal performance. Additionally, this feature enables better pool monitoring and maintenance by allowing users to impose an upper limit on a filesystem's size, ensuring efficient resource allocation. 
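A possible invocation with the stratis command-line interface, assuming a hypothetical pool mypool and file system myfs (the set-size-limit subcommand is the one named in the stratis-cli 3.6.0 entry later in this chapter; the argument order and the 2TiB value are illustrative):
# stratis filesystem set-size-limit mypool myfs 2TiB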
Jira:RHEL-12898 Converting a standard LV to a thin LV by using lvconvert is now possible By specifying a standard logical volume (LV) as thin pool data, you can now convert a standard LV to a thin LV by using the lvconvert command. With this update, you can convert existing LVs to use the thin provisioning facility. Jira:RHEL-8357 multipathd now supports detecting FPIN-Li events for NVMe devices Previously, the multipathd command would only monitor Fabric Performance Impact Notification Link Integrity (FPIN-Li) events on SCSI devices. multipathd could listen for Link Integrity events sent by a Fibre Channel fabric and use them to mark paths as marginal. This feature was only supported for multipath devices on top of SCSI devices, and multipathd was unable to mark Non-volatile Memory Express (NVMe) device paths as marginal, which limited the use of this feature. With this update, multipathd supports detecting FPIN-Li events for both SCSI and NVMe devices. As a result, multipath now does not use paths without a good fabric connection, while other paths are available. This helps to avoid IO delays in such situations. Jira:RHEL-6678 max_retries option is now added to the defaults section of multipath.conf This enhancement adds the max_retries option to the defaults section of the multipath.conf file. By default this option is unset, and uses the SCSI layer's default value of 5 retries. The valid values for this option are from 0 to 5 . When this option is set, it overrides the default value of the max_retries sysfs attribute for SCSI devices. This attribute controls the number of times the SCSI layer retries I/O commands before returning failure when it encounters certain error types. If users encounter an issue where multipath's path checkers return success but I/O to a device is hanging, they can set this option to decrease the time before the I/O will be retried down another path. Jira:RHEL-1729 [1] auto_resize option is now added to the defaults section of multipath.conf Previously, to resize a multipath device, you had to manually run the multipathd resize map <name> command. With this update, the auto_resize option is now added to the defaults section of the multipath.conf file. This option controls when the multipathd command can automatically resize a multipath device. The following are the different values for auto_resize : By default, auto_resize is set to never . In this case, multipathd works without any change. If auto_resize is set to grow_only , multipathd automatically resizes the multipath device when the device's paths have grown in size. If auto_resize is set to grow_shrink , multipathd automatically shrinks the multipath device when the device's paths are decreased in size. As a result, when this option is enabled, you no longer need to manually resize your multipath devices. Jira:RHEL-986 [1] Changes to Arcus NVMeoFC multipath.conf settings are now included in kernel Device-mapper-multipath now has a built-in configuration for the HPE Alletra 9000 NVMeFC array. Arcus added support for ANA (Asymmetric Namespace Access) for NVMeoFC. This is similar to ALUA for SCSI. A change in the multipath.conf is required for a RHEL host to use this feature and send only I/O to ANA optimized paths when available. Without this change, device mapper was sending I/O to both ANA optimized and ANA non-optimized paths. Note This change is only for NVMeoFC. FCP multipath.conf content already had this setting for supporting ALUA previously.
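For the max_retries and auto_resize options described above, a minimal defaults section in /etc/multipath.conf might look as follows (the values shown are illustrative choices, not recommendations from this release note):
defaults {
    max_retries 3
    auto_resize grow_only
}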
Jira:RHEL-1830 stratis-cli rebased to version 3.6.0 The stratis-cli package has been upgraded to version 3.6.0. Notable bug fixes and enhancements include: The stratis-cli command-line interface supports an additional option to set the file system size limit on creation. The set-size-limit and unset-size-limit are two new file system commands, which set or unset the file system size limit after creating a file system. stratis-cli now incorporates password verification when it is used to set a key in the kernel keyring by using a manual entry. stratis-cli now supports specifying a pool either by name or by UUID when stopping a pool. stratis-cli also gets updates with various internal improvements, and now enforces a requirement of at least the python 3.9 version in its package configuration. Jira:RHEL-2265 [1] boom rebased to version 1.6.0 The boom package has been upgraded to version 1.6.0. Notable enhancements include: Support for multi-volume snapshot boot syntax supported by the systemd command. The new --mount and --no-fstab options are added to specify additional volumes to mount at the boot entry. Jira:RHEL-16813 NVMe-FC Boot from SAN is now fully supported The Non-volatile Memory Express (NVMe) over Fibre Channel (NVMe/FC) Boot, which was introduced in Red Hat Enterprise Linux 9.2 as a Technology Preview, is now fully supported. Some NVMe/FC host bus adapters support an NVMe/FC boot capability. For more information on programming a Host Bus Adapter (HBA) to enable NVMe/FC boot capability, see the NVMe/FC host bus adapter manufacturer's documentation. Jira:RHEL-1492 [1] 4.10. High availability and clusters pcs support for ISO 8601 duration specification for time properties The pcs command-line interface now allows you to specify values for Pacemaker time properties according to the ISO 8601 duration specification standard. Jira:RHEL-7672 Support for new pscd Web UI features The pscd Web UI now supports the following features: Moving a cluster resource off the node on which it is currently running Banning a resource from running on a node Displaying cluster status that shows the age of the cluster status and when the cluster state is being reloaded Requesting a reload of the cluster status display Jira:RHEL-7582 , Jira:RHEL-7739 TLS cipher list now defaults to system-wide crypto policy Previously, the pcsd TLS cipher list was set to DEFAULT:!RC4:!3DES:@STRENGTH by default. With this update, the cipher list is defined by the system-wide crypto policy by default. The TLS ciphers accepted by the pcsd daemon might change with this upgrade, depending on the currently set crypto policy. For more information about the crypto policies framework, see the crypto-policies (7) man page. Jira:RHEL-7724 4.11. Dynamic programming languages, web and database servers Python 3.12 available in RHEL 9 RHEL 9.4 introduces Python 3.12, provided by the new package python3.12 and a suite of packages built for it, and the ubi9/python-312 container image. Notable enhancements compared to the previously released Python 3.11 include: Python introduces a new type statement and new type parameter syntax for generic classes and functions. Formatted string literals (f-strings) have been formalized in the grammar and can now be integrated into the parser directly. Python now provides a unique per-interpreter global interpreter lock (GIL). You can now use the buffer protocol from Python code. Dictionary, list, and set comprehensions in CPython are now inlined.
This significantly increases the speed of a comprehension execution. CPython now supports the Linux perf profiler. CPython now provides stack overflow protection on supported platforms. Python 3.12 and packages built for it can be installed in parallel with Python 3.9 and Python 3.11 on the same system. To install packages from the python3.12 stack, use, for example: To run the interpreter, use, for example: See Installing and using Python for more information. For information about the length of support of Python 3.12, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-14941 A new environment variable in Python to control parsing of email addresses To mitigate CVE-2023-27043 , a backward incompatible change to ensure stricter parsing of email addresses was introduced in Python 3. This update introduces a new PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING environment variable. When you set this variable to true , the previous, less strict parsing behavior is the default for the entire system. However, individual calls to the affected functions can still enable stricter behavior. You can achieve the same result by creating the /etc/python/email.cfg configuration file with the following content: For more information, see the Knowledgebase article Mitigation of CVE-2023-27043 introducing stricter parsing of email addresses in Python . Jira:RHELDOCS-17369 [1] A new module stream: ruby:3.3 RHEL 9.4 introduces Ruby 3.3.0 in a new ruby:3.3 module stream. This version provides several performance improvements, bug and security fixes, and new features over Ruby 3.1 distributed with RHEL 9.1. Notable enhancements include: You can use the new Prism parser instead of Ripper . Prism is a portable, error tolerant, and maintainable recursive descent parser for the Ruby language. YJIT, the Ruby just-in-time (JIT) compiler implementation, is no longer experimental and it provides major performance improvements. The Regexp matching algorithm has been improved to reduce the impact of potential Regular Expression Denial of Service (ReDoS) vulnerabilities. The new experimental RJIT (a pure-Ruby JIT) compiler replaces MJIT. Use YJIT in production. A new M:N thread scheduler is now available. Other notable changes: You must now use the Lrama LALR parser generator instead of Bison . Several deprecated methods and constants have been removed. The Racc gem has been promoted from a default gem to a bundled gem. To install the ruby:3.3 module stream, use: If you want to upgrade from an earlier ruby module stream, see Switching to a later stream . For information about the length of support of Ruby 3.3, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-17089 [1] A new module stream: php:8.2 RHEL 9.4 adds PHP 8.2 as a new php:8.2 module stream. Improvements in this release include: Readonly classes Several new stand-alone types A new Random extension Constraints in traits To install the php:8.2 module stream, use the following command: If you want to upgrade from the php:8.1 stream, see Switching to a later stream . For details regarding PHP usage on RHEL 9, see Using the PHP scripting language . For information about the length of support for the php module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle .
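The commands elided at the "use, for example:" and "use:" prompts in the Python, Ruby, and PHP entries above most likely take the standard RHEL package and module stream form, for example:
# dnf install python3.12
$ python3.12
# dnf module install ruby:3.3
# dnf module install php:8.2
Similarly, the system-wide switch to the less strict email parsing behavior can be made by exporting the variable named above, for example export PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING=true ; the exact content of the /etc/python/email.cfg file is not reproduced here.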
Jira:RHEL-14699 [1] The name() method of the perl-DateTime-TimeZone module now returns the time zone name The perl-DateTime-TimeZone module has been updated to version 2.62, which changed the value that is returned by the name() method from the time zone alias to the main time zone name. For more information and an example, see the Knowledgebase article Change in the perl-DateTime-TimeZone API related to time zone name and alias . Jira:RHEL-35685 A new module stream: nginx:1.24 The nginx 1.24 web and proxy server is now available as the nginx:1.24 module stream. This update provides several bug fixes, security fixes, new features, and enhancements over the previously released version 1.22. New features and changes related to Transport Layer Security (TLS): Encryption keys are now automatically rotated for TLS session tickets when using shared memory in the ssl_session_cache directive. Memory usage has been optimized in configurations with Secure Sockets Layer (SSL) proxy. You can now disable looking up IPv4 addresses while resolving by using the ipv4=off parameter of the resolver directive. nginx now supports the $proxy_protocol_tlv_* variables, which store the values of the Type-Length-Value (TLV) fields that appear in the PROXY v2 TLV protocol. The ngx_http_gzip_static_module module now supports byte ranges. Other changes: Header lines are now represented as linked lists in the internal API. nginx now concatenates identically named header strings passed to the FastCGI, SCGI, and uwsgi back ends in the $r->header_in() method of the ngx_http_perl_module , and during lookups of the $http_... , $sent_http_... , $sent_trailer_... , $upstream_http_... , and $upstream_trailer_... variables. nginx now displays a warning if protocol parameters of a listening socket are redefined. nginx now closes connections with lingering if pipelining was used by the client. The logging level of various SSL errors has been lowered, for example, from Critical to Informational . To install the nginx:1.24 stream, use: To upgrade from the nginx 1.22 stream, switch to a later stream . For more information, see Setting up and configuring NGINX . For information about the length of support for the nginx module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-14713 [1] A new module stream: mariadb:10.11 MariaDB 10.11 is now available as a new module stream, mariadb:10.11 . Notable enhancements over the previously available version 10.5 include: A new sys_schema feature. Atomic Data Definition Language (DDL) statements. A new GRANT ... TO PUBLIC privilege. Separate SUPER and READ ONLY ADMIN privileges. A new UUID database data type. Support for the Secure Socket Layer (SSL) protocol version 3; the MariaDB server now requires correctly configured SSL to start. Support for the natural sort order through the natural_sort_key() function. A new SFORMAT function for arbitrary text formatting. Changes to the UTF-8 charset and the UCA-14 collation. systemd socket activation files available in the /usr/share/ directory. Note that they are not a part of the default configuration in RHEL as opposed to upstream. Error messages containing the MariaDB string instead of MySQL . Error messages available in the Chinese language. Changes to the default logrotate file.
For MariaDB and MySQL clients, the connection property specified on the command line (for example, --port=3306 ), now forces the protocol type of communication between the client and the server, such as tcp , socket , pipe , or memory . For more information about changes in MariaDB 10.11, see Notable differences between MariaDB 10.5 and MariaDB 10.11 . For more information about MariaDB, see Using MariaDB . To install the mariadb:10.11 stream, use: If you want to upgrade from MariaDB 10.5, see Upgrading from MariaDB 10.5 to MariaDB 10.11 . For information about the length of support for the mariadb module streams, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-3638 A new module stream: postgresql:16 RHEL 9.4 introduces PostgreSQL 16 as the postgresql:16 module stream. PostgreSQL 16 provides several new features and enhancements over version 15. Notable enhancements include: Enhanced bulk loading improves performance. The libpq library now supports connection-level load balancing. You can use the new load_balance_hosts option for more efficient load balancing. You can now create custom configuration files and include them in the pg_hba.conf and pg_ident.conf files. PostgreSQL now supports regular expression matching on database and role entries in the pg_hba.conf file. Other changes include: PostgreSQL is no longer distributed with the postmaster binary. Users who start the postgresql server by using the provided systemd unit file (the systemctl start postgres command) are not affected by this change. If you previously started the postgresql server directly through the postmaster binary, you must now use the postgres binary instead. PostgreSQL no longer provides documentation in PDF format within the package. Use the online documentation instead. See also Using PostgreSQL . To install the postgresql:16 stream, use the following command: If you want to upgrade from an earlier postgresql stream within RHEL 9, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data as described in Migrating to a RHEL 9 version of PostgreSQL . For information about the length of support for the postgresql module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-3635 Git rebased to version 2.43.0 The Git version control system has been updated to version 2.43.0, which provides bug fixes, enhancements, and performance improvements over the previously released version 2.39. Notable enhancements include: You can now use the new --source option with the git check-attr command to read the .gitattributes file from the provided tree-ish object instead of the current working directory. Git can now pass information from the WWW-Authenticate response-type header to credential helpers. In case of an empty commit, the git format-patch command now writes an output file containing a header of the commit instead of creating an empty file. You can now use the git blame --contents= <file> <revision> -- <path> command to find the origins of lines starting at <file> contents through the history that leads to <revision> . The git log --format command now accepts the %(decorate) placeholder for further customization to extend the capabilities provided by the --decorate option. Jira:RHEL-17100 [1] Git LFS rebased to version 3.4.1 The Git Large File Storage (LFS) extension has been updated to version 3.4.1, which provides bug fixes, enhancements, and performance improvements over the previously released version 3.2.0. 
Notable changes include: The git lfs push command can now read references and object IDs from standard input. Git LFS now handles alternative remotes without relying on Git. Git LFS now supports the WWW-Authenticate response-type header as a credential helper. Jira:RHEL-17101 [1] 4.12. Compilers and development tools LLVM Toolset rebased to version 17.0.6 LLVM Toolset has been updated to version 17.0.6. Notable enhancements include: The opaque pointers migration is now completed. Removed support for the legacy pass manager in middle-end optimization. Clang changes: C++20 coroutines are no longer considered experimental. Improved code generation for the std::move function and similar in unoptimized builds. For more information, see the LLVM and Clang upstream release notes. Jira:RHEL-9283 Rust Toolset rebased to version 1.75.0 Rust Toolset has been updated to version 1.75.0. Notable enhancements include: Constant evaluation time is now unlimited Cleaner panic messages Cargo registry authentication async fn and opaque return types in traits Jira:RHEL-12963 Go Toolset rebased to version 1.21.0 Go Toolset has been updated to version 1.21.0. Notable enhancements include: min , max , and clear built-ins have been added. Official support for profile guided optimization has been added. Package initialization order is now more precisely defined. Type inferencing is improved. Backwards compatibility support is improved. For more information, see the Go upstream release notes. Jira:RHEL-11871 [1] Clang resource directory moved The Clang resource directory, where Clang stores its internal headers and libraries, has been moved from /usr/lib64/clang/17 to /usr/lib/clang/17 . Jira:RHEL-9346 elfutils rebased to version 0.190 The elfutils package has been updated to version 0.190. Notable improvements include: The libelf library now supports relative relocation (RELR). The libdw library now recognizes .debug_[ct]u_index sections. The eu-readelf utility now supports a new -Ds , --use-dynamic --symbol option to show symbols through the dynamic segment without using ELF sections. The eu-readelf utility can now show .gdb_index version 9. A new eu-scrlines utility compiles a list of source files associated with a specified DWARF or ELF file. A debuginfod server schema has changed for a 60% compression in file name representation (this requires reindexing). Jira:RHEL-12489 systemtap rebased to version 5.0 The systemtap package has been updated to version 5.0. Notable enhancements include: Faster and more reliable kernel-user transport. Extended DWARF5 debuginfo format support. Jira:RHEL-12488 Updated GCC Toolset 13 GCC Toolset 13 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced in RHEL 9.4 include: The GCC compiler has been updated to version 13.2.1, which provides many bug fixes and enhancements that are available in upstream GCC. binutils now support AMD CPUs based on the znver5 core through the -march=znver5 compiler switch. annobin has been updated to version 12.32. The annobin plugin for GCC now defaults to using a more compressed format for the notes that it stores in object files, resulting in smaller object files and faster link times, especially in large, complex programs. 
The following tools and versions are provided by GCC Toolset 13: Tool Version GCC 13.2.1 GDB 12.1 binutils 2.40 dwz 0.14 annobin 12.32 To install GCC Toolset 13, run the following command as root: To run a tool from GCC Toolset 13: To run a shell session where tool versions from GCC Toolset 13 override system versions of these tools: For more information, see GCC Toolset 13 and Using GCC Toolset . Jira:RHEL-23798 [1] Compiling with GCC and the -fstack-protector flag no longer fails to guard dynamic stack allocations on 64-bit ARM Previously, on the 64-bit ARM architecture, the system GCC compiler with the -fstack-protector flag failed to detect a buffer overflow in functions containing a C99 variable-length array or an alloca() -allocated object. Consequently, an attacker could overwrite saved registers on the stack. With this update, the buffer overflow detection on 64-bit ARM has been fixed. As a result, applications compiled with the system GCC are more secure. Jira:RHEL-17638 [1] GCC Toolset 13: Compiling with GCC and the -fstack-protector flag no longer fails to guard dynamic stack allocations on 64-bit ARM Previously, on the 64-bit ARM architecture, the GCC compiler with the -fstack-protector flag failed to detect a buffer overflow in functions containing a C99 variable-length array or an alloca() -allocated object. Consequently, an attacker could overwrite saved registers on the stack. With this update, the buffer overflow detection on 64-bit ARM has been fixed. As a result, applications compiled with GCC are more secure. Jira:RHEL-16998 pcp updated to version 6.2.0 The pcp package has been updated to version 6.2.0. Notable improvements include: pcp-htop now supports user-defined tabs. pcp-atop now supports a new bar graph visualization mode. OpenMetrics PMDA metric labels and logging are improved. Additional Linux kernel virtual memory metrics have been added. New tools: pmlogredact pcp-buddyinfo pcp-meminfo pcp-netstat pcp-slabinfo pcp-zoneinfo Jira:RHEL-2317 [1] A new grafana-selinux package Previously, the default installation of grafana-server ran as an unconfined_service_t SELinux type. This update adds the new grafana-selinux package, which contains an SELinux policy for grafana-server and which is installed by default with grafana-server . As a result, grafana-server now runs as grafana_t SELinux type. Jira:RHEL-7505 papi supports new processor microarchitectures With this enhancement, you can access performance monitoring hardware using papi events presets on the following processor microarchitectures: AMD Zen 4 4th Generation Intel(R) Xeon(R) Scalable Processors Jira:RHEL-9333 [1] , Jira:RHEL-9335, Jira:RHEL-9334 New package: maven-openjdk21 The maven:3.8 module stream now includes the maven-openjdk21 subpackage, which provides the Maven JDK binding for OpenJDK 21 and configures Maven to use the system OpenJDK 21. Jira:RHEL-13046 [1] New package: libzip-tools RHEL 9.4 introduces the libzip-tools package, which provides utilities such as zipcmp , zipmerge , and ziptool . Jira:RHEL-17567 cmake rebased to version 3.26 The cmake package has been updated to version 3.26. Notable improvements include: Added support for the C17 and C18 language standards. cmake can now query the /etc/os-release file for operating system identification information. Added support for the CUDA 20 and nvtx3 libraries. Added support for the Python stable application binary interface. Added support for Perl 5 in the Simplified Wrapper and Interface Generator (SWIG) tool. 
Jira:RHEL-7393 valgrind updated to 3.22 The valgrind package has been updated to version 3.22. Notable improvements include: valgrind memcheck now checks that the values given to the C functions memalign , posix_memalign , and aligned_alloc , and the C++17 aligned new operator are valid alignment values. valgrind memcheck now supports mismatch detection for C++14 sized and C++17 aligned new and delete operators. Added support for lazy reading of DWARF debugging information, resulting in faster startup when debuginfo packages are installed. Jira:RHEL-12490 libabigail rebased to version 2.4 The libabigail package has been updated to version 2.4. Notable enhancements include: The abidiff tool now supports comparing two sets of binaries. Added support for suppressing harmless change reports related to flexible array data members. Improved support for suppressing harmless change reports about enum types. Improved representation of changes to anonymous enum, union, and struct types. Jira:RHEL-12491 4.13. Identity Management A new passwordless authentication method is available in SSSD With this update, you can enable and configure passwordless authentication in SSSD to use a biometric device that is compatible with the FIDO2 specification, for example a YubiKey. You must register the FIDO2 token in advance and store this registration information in the user account in RHEL IdM, Active Directory, or an LDAP store. RHEL implements FIDO2 compatibility with the libfido2 library, which currently only supports USB-based tokens. Jira:RHELDOCS-17841 [1] The ansible-freeipa ipauser and ipagroup modules now support a new renamed state With this update, you can use the renamed state in ansible-freeipa ipauser module to change the user name of an existing IdM user. You can also use this state in ansible-freeipa ipagroup module to change the group name of an existing IdM group. Jira:RHEL-4962 Identity Management users can now use external identity providers to authenticate to IdM With this enhancement, you can now associate Identity Management (IdM) users with external identity providers (IdPs) that support the OAuth 2 device authorization flow. Examples of such IdPs include Red Hat build of Keycloak, Microsoft Entra ID (formerly Azure Active Directory), GitHub, and Google. If an IdP reference and an associated IdP user ID exist in IdM, you can use them to enable an IdM user to authenticate at the external IdP. After performing authentication and authorization at the external IdP, the IdM user receives a Kerberos ticket with single sign-on capabilities. The user must authenticate with the SSSD version available in RHEL 9.1 or later. Jira:RHELPLAN-169666 [1] ipa rebased to version 4.11 The ipa package has been updated from version 4.10 to 4.11. Notable changes include: Support for FIDO2-based passkeys. Initial implementation of resource-based constrained delegation (RBCD) for Kerberos services. Context manager for ipalib.api to automatically configure, connect, and disconnect. The installation of an IdM replica now occurs against a chosen server, not only for Kerberos authentication but also for all IPA API and CA requests. The ansible-freeipa package has been rebased from version 1.11 to 1.12.1. The ipa-healthcheck package has been rebased from version 0.12 to 0.16. For more information, see the upstream release notes . 
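The ipa 4.11 entry above mentions the ipalib.api Python interface, and Python is the recommended way to drive the IdM API. The following is a minimal, hedged sketch of a read-only call from an enrolled client; it assumes a valid Kerberos ticket and the installed IPA client packages, and the user name admin is only an example.

# Hedged sketch: calling the IdM API through ipalib on an enrolled client.
# Assumes kinit has been run and /etc/ipa/default.conf exists.
from ipalib import api

api.bootstrap(context="cli")        # read the client configuration
api.finalize()
api.Backend.rpcclient.connect()
try:
    result = api.Command.user_show("admin")   # example user only
    print(result["result"]["uid"])
finally:
    api.Backend.rpcclient.disconnect()

As noted above, ipa 4.11 also adds a context manager for ipalib.api, which can replace the explicit connect and disconnect calls shown here.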
Jira:RHEL-11652 Deleting expired KCM Kerberos tickets Previously, if you attempted to add a new credential to the Kerberos Credential Manager (KCM) and you had already reached the storage space limit, the new credential was rejected. The user storage space is limited by the max_uid_ccaches configuration option that has a default value of 64. With this update, if you have already reached the storage space limit, your oldest expired credential is removed and the new credential is added to the KCM. If there are no expired credentials, the operation fails and an error is returned. To prevent this issue, you can free some space by removing credentials using the kdestroy command. Jira:SSSD-6216 IdM now supports the idoverrideuser , idoverridegroup and idview Ansible modules With this update, the ansible-freeipa package now contains the following modules: idoverrideuser Allows you to override user attributes for users stored in the Identity Management (IdM) LDAP server, for example, the user login name, home directory, certificate, or SSH keys. idoverridegroup Allows you to override attributes for groups stored in the IdM LDAP server, for example, the name of the group, its GID, or description. idview Allows you to organize user and group ID overrides and apply them to specific IdM hosts. In the future, you will be able to use these modules to enable AD users to use smart cards to log in to IdM. Jira:RHEL-16934 The idp Ansible module allows associating IdM users with external IdPs With this update, you can use the idp ansible-freeipa module to associate Identity Management (IdM) users with external identity providers (IdP) that support the OAuth 2 device authorization flow. If an IdP reference and an associated IdP user ID exist in IdM, you can use them to enable IdP authentication for an IdM user. After performing authentication and authorization at the external IdP, the IdM user receives a Kerberos ticket with single sign-on capabilities. The user must authenticate with the SSSD version available in RHEL 8.7 or later. Jira:RHEL-16939 getcert add-ca returns a new return code if a certificate is already present or tracked With this update, the getcert command returns a specific return code, 2 , if you try to add or track a certificate that is already present or tracked. Previously, the command returned return code 1 on any error condition. Jira:RHEL-22302 The delegation of DNS zone management is now enabled in ansible-freeipa You can now use the dnszone ansible-freeipa module to delegate DNS zone management. Use the permission or managedby variable of the dnszone module to configure a per-zone access delegation permission. Jira:RHEL-19134 Enforcing OTP usage for all LDAP clients With the release of the RHBA-2024:2558 advisory, in RHEL IdM, you can now set the default behavior for LDAP server authentication of user accounts with two-factor (OTP) authentication configured. If OTP is enforced, LDAP clients cannot authenticate against an LDAP server using single factor authentication (a password) for users that have associated OTP tokens. This method is already enforced through the Kerberos backend by using a special LDAP control with OID 2.16.840.1.113730.3.8.10.7 without any data. 
To enforce OTP usage for all LDAP clients, administrators can use the following command: To change back to the OTP behavior for all LDAP clients, use the following command: Jira:RHEL-23377 [1] The runasuser_group parameter is now available in ansible-freeipa ipasudorule With this update, you can set Groups of RunAs Users for a sudo rule by using the ansible-freeipa ipasudorule module. The option is already available in the Identity Management (IdM) command-line interface and the IdM Web UI. Jira:RHEL-19130 389-ds-base rebased to version 2.4.5 The 389-ds-base package has been updated to version 2.4.5. Notable bug fixes and enhancements over version 2.3.4 include: https://www.port389.org/docs/389ds/releases/release-2-3-5.html https://www.port389.org/docs/389ds/releases/release-2-3-6.html https://www.port389.org/docs/389ds/releases/release-2-3-7.html https://www.port389.org/docs/389ds/releases/release-2-4-0.html https://www.port389.org/docs/389ds/releases/release-2-4-1.html https://www.port389.org/docs/389ds/releases/release-2-4-2.html https://www.port389.org/docs/389ds/releases/release-2-4-3.html https://www.port389.org/docs/389ds/releases/release-2-4-4.html https://www.port389.org/docs/389ds/releases/release-2-4-5.html Jira:RHEL-15907 Transparent Huge Pages are now disabled by default for the ns-slapd process When large database caches are used, Transparent Huge Pages (THP) can have a negative effect on Directory Server performance under heavy load, for example, high memory footprint, high CPU usage and latency spikes. With this enhancement, a new THP_DISABLE=1 configuration option was added to the /usr/lib/systemd/system/[email protected]/custom.conf drop-in configuration file for the dirsrv systemd unit to disable THP for the ns-slapd process. In addition, the Directory Server health check tool now detects the THP settings. If you enabled THP system-wide and for the Directory Server instance, the health check tool informs you about the enabled THP and prints recommendations on how to disable them. Jira:RHEL-5142 The new lastLoginHistSize configuration attribute is now available for the Account Policy plug-in Previously, when a user did a successful bind, only the time of the last login was available. With this update, you can use the new lastLoginHistSize configuration attribute to manage a history of successful logins. By default, the last five successful logins are saved. Note that for the lastLoginHistSize attribute to collect statistics of successful logins, you must enable the alwaysRecordLogin attribute for the Account Policy plug-in. For more details, see lastLoginHistSize . Jira:RHEL-5133 [1] The new notes=M message in the access log to identify MFA binds With this update, when you configure the two-factor authentication for user accounts by using a pre-bind authentication plug-in, such as MFA plug-in, the Directory Server log files record the following messages during BIND operations: The access log records the new notes=M note message: The security log records the new SIMPLE/MFA bind method: Note that for the access and security logs to record such messages, the pre-bind authentication plug-in must set the flag by using the SLAPI API if a bind was part of this plug-in. Jira:RHELDOCS-17838 [1] The new inchainMatch matching rule is now available With this update, a client application can use the new inchainMatch matching rule to search for the ancestry of an LDAP entry. 
The member , manager , parentOrganization , and memberof attributes can be used with the inchainMatch matching rule and the following searches can be performed: Find all direct or indirect groups in which a user is a member. Find all direct or indirect users whose manager is a certain user. Find all direct or indirect organizations an entry belongs to. Finds all direct or indirect members of a certain group. Note that for performance reasons, you must index the member , manager , parentOrganization , and memberof attributes if the client application performs searches against these attributes by using the inchainMatch matching rule. Directory Server uses the In Chain plug-in that is enabled by default to implement the inchainMatch matching rule. However, because inchainMatch is expensive to compute, an access control instruction (ACI) limits the matching rule usage. For more details, refer to Using inchainMatch matching rule to find the ancestry of an LDAP entry . Jira:RHELDOCS-17256 [1] The HAProxy protocol is now supported for the 389-ds-base package Previously, Directory Server did not differentiate incoming connections between proxy and non-proxy clients. With this update, you can use the new nsslapd-haproxy-trusted-ip multi-valued configuration attribute to configure the list of trusted proxy servers. When nsslapd-haproxy-trusted-ip is configured under the cn=config entry, Directory Server uses the HAProxy protocol to receive client IP addresses via an additional TCP header so that access control instructions (ACIs) can be correctly evaluated and client traffic can be logged. If an untrusted proxy server initiates a bind request, Directory Server rejects the request and records the following message to the error log file: For more details, see nsslapd-haproxy-trusted-ip . Jira:RHEL-5130 samba rebased to version 4.19.4 The samba packages have been upgraded to upstream version 4.19.4, which provides bug fixes and enhancements over the version. The most notable changes are: Command-line options in the smbget utility have been renamed and removed for a consistent user experience. However, this can break existing scripts or jobs that use the utility. See the smbget --help command and smbget(1) man page for further details about the new options. If the winbind debug traceid option is enabled, the winbind service now logs, additionally, the following fields: traceid : Tracks the records belonging to the same request. depth : Tracks the request nesting level. Samba no longer uses its own cryptography implementations and, instead, now fully uses cryptographic functionality provided by the GnuTLS library. The directory name cache size option was removed. Note that the server message block version 1 (SMB1) protocol has been deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Red Hat does not support downgrading tdb database files. After updating Samba, use the testparm utility to verify the /etc/samba/smb.conf file. Jira:RHEL-16476 Identity Management API is now fully supported The Identity Management (IdM) API was available as a Technology Preview in RHEL 9.2. Since RHEL 9.3, it has been fully supported. Users can use existing tools and scripts even if the IdM API is enhanced to enable multiple versions of API commands. These enhancements do not change the behavior of a command in an incompatible way. 
This has the following benefits: Administrators can use previous or later versions of IdM on the server than on the managing client. Developers can use a specific version of an IdM call, even if the IdM version changes on the server. The communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature. NOTE While IdM API provides a JSON-RPC interface, this type of access is not supported. Red Hat recommends accessing the API with Python instead. Using Python automates important parts such as the metadata retrieval from the server, which allows listing all available commands. Bugzilla:1513934 4.14. The web console RHEL web console can now generate Ansible and shell scripts In the web console, you can now easily access and copy automation scripts on the kdump configuration page. You can then use the generated script to implement a specific kdump configuration on multiple systems. Jira:RHELDOCS-17060 [1] Simplified managing storage and resizing partitions on Storage The Storage section of the web console is now redesigned. The new design improves visibility across all views. The overview page now presents all storage objects in a comprehensive table, which makes it easier to perform operations directly. You can click any row to view detailed information and any supplementary actions. Additionally, you can now resize partitions from the Storage section. Jira:RHELDOCS-17056 [1] 4.15. Red Hat Enterprise Linux system roles The ad_integration RHEL system role now supports configuring dynamic DNS update options With this update, the ad_integration RHEL system role supports configuring options for dynamic DNS updates using SSSD when integrated with Active Directory (AD). By default, SSSD will attempt to automatically refresh the DNS record: When the identity provider comes online (always). At a specified interval (optional configuration); by default, the AD provider updates the DNS record every 24 hours. You can change these and other settings using the new variables in ad_integration . For example, you can set ad_dyndns_refresh_interval to 172800 to change the DNS record refresh interval to 48 hours. For more details regarding the role variables, see the resources in the /usr/share/doc/rhel-system-roles/ad_integration/ directory. Jira:RHELDOCS-17372 [1] The Storage RHEL system roles now support shared LVM device management The RHEL system roles now support the creation and management of shared logical volumes and volume groups. Jira:RHEL-1535 Microsoft SQL Server 2022 available on RHEL 9 The mssql-server system role is now available on RHEL 9. The role adds two variables: mssql_run_selinux_confined to control whether to run SQL Server as a confined application or not. If set to true , the role installs the mssql-server-selinux package. If set to false , the role removes the mssql-server-selinux package. Default setting is true for RHEL 9 managed nodes and false for other managed nodes. mssql_manage_selinux to control whether to configure SELinux. When set to true , the variable configures the enforcing or permissive mode based on the value of the mssql_run_selinux_confined variable. Jira:RHEL-16342 The rhc system role now supports RHEL 7 systems You can now manage RHEL 7 systems by using the rhc system role. Register the RHEL 7 system to Red Hat Subscription Management (RHSM) and Insights and start managing your system using the rhc system role.
Using the rhc_insights.remediation parameter has no impact on RHEL 7 systems as the Insights Remediation feature is currently not available on RHEL 7. Jira:RHEL-16976 New RHEL system role for configuring fapolicyd With the new fapolicyd RHEL system role, you can use Ansible playbooks to manage and configure the fapolicyd framework. The fapolicyd software framework controls the execution of applications based on a user-defined policy. Jira:RHEL-16542 The RHEL system roles now support LVM snapshot management With this enhancement, you can use the new snapshot RHEL system role to create, configure, and manage LVM snapshots. Jira:RHEL-16552 The Nmstate API and the network RHEL system role now support new route types With this enhancement, you can use the following route types with the Nmstate API and the network RHEL system role: blackhole prohibit unreachable Jira:RHEL-19579 [1] The ad_integration RHEL system role now supports custom SSSD domain configuration settings Previously, when using the ad_integration RHEL system role, it was not possible to add custom settings to the domain configuration section in the sssd.conf file using the role. With this enhancement, the ad_integration role can now modify the sssd.conf file and, as a result, you can use custom SSSD settings. Jira:RHEL-17668 The ad_integration RHEL system role now supports custom SSSD settings Previously, when using the ad_integration RHEL system role, it was not possible to add custom settings to the [sssd] section in the sssd.conf file using the role. With this enhancement, the ad_integration role can now modify the sssd.conf file and, as a result, you can use custom SSSD settings. Jira:RHEL-21133 New rhc_insights.display_name option in the rhc role to set display names You can now configure or update the display name of the system registered to Red Hat Insights by using the new rhc_insights.display_name parameter. The parameter allows you to name the system based on your preference to easily manage systems in the Insights Inventory. If your system is already connected with Red Hat Insights, use the parameter to update the existing display name. If the display name is not set explicitly on registration, it is set to the hostname by default. It is not possible to automatically revert the display name to the hostname, but it can be set so manually. Jira:RHEL-16964 New RHEL system role for configuring fapolicyd With the new fapolicyd RHEL system role, you can use Ansible playbooks to manage and configure the fapolicyd framework. The fapolicyd software framework controls the execution of applications based on a user-defined policy. Jira:RHEL-16541 New logging_preserve_fqdn variable for the logging RHEL system role Previously, it was not possible to configure a fully qualified domain name (FQDN) using the logging system role. This update adds the optional logging_preserve_fqdn variable, which you can use to set the preserveFQDN configuration option in rsyslog to use the full FQDN instead of a short name in syslog entries. Jira:RHEL-15932 The logging role supports general queue and general action parameters in output modules Previously, it was not possible to configure general queue parameters and general action parameters with the logging role. With this update, the logging RHEL system role supports configuration of general queue parameters and general action parameters in output modules. 
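The RHEL system roles described in this section are normally applied from Ansible playbooks. Where automation is written in Python, the ansible-runner library can drive such a playbook; the sketch below is hedged, and the playbook, inventory, and directory names are placeholders rather than anything shipped with the roles.

# Hedged sketch: applying a playbook that includes a RHEL system role
# (for example rhel-system-roles.logging) through ansible-runner.
# The paths below are placeholders for your own project layout.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/tmp/system-roles-demo",  # runner working directory
    playbook="apply-logging-role.yml",          # playbook that includes the role
    inventory="inventory.ini",
)
print(result.status, result.rc)                 # for example: successful 0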
Jira:RHEL-15439 The postgresql RHEL system role now supports PostgreSQL 16 The postgresql RHEL system role, which installs, configures, manages, and starts the PostgreSQL server, now supports PostgreSQL 16. For more information about this system role, see Installing and configuring PostgreSQL by using the postgresql RHEL system role . Jira:RHEL-18962 Support for creation of volumes without creating a file system With this enhancement, you can now create a new volume without creating a file system by specifying the fs_type=unformatted option. Similarly, existing file systems can be removed using the same approach by ensuring that the safe mode is disabled. Jira:RHEL-16212 Support for new ha_cluster system role features The ha_cluster system role now supports the following features: Enablement of the repositories containing resilient storage packages, such as dlm or gfs2 . A Resilient Storage subscription is needed to access the repository. Configuration of fencing levels, allowing a cluster to use multiple devices to fence nodes. Configuration of node attributes. For information about the parameters you configure to implement these features, see Configuring a high-availability cluster by using the ha_cluster RHEL system role . Jira:RHEL-15876 [1] , Jira:RHEL-22106 , Jira:RHEL-15910 ForwardToSyslog flag is now supported in the journald system role In the journald RHEL system role, the journald_forward_to_syslog variable controls whether the received messages should be forwarded to the traditional syslog daemon or not. The default value of this variable is false . With this enhancement, you can now configure the ForwardToSyslog flag by setting journald_forward_to_syslog to true in the inventory. As a result, when using remote logging systems such as Splunk, the logs are available in the /var/log files. Jira:RHEL-21117 New rhc_insights.ansible_host option in the rhc role to set Ansible hostnames You can now configure or update the Ansible hostname for the systems registered to Red Hat Insights by using the new rhc_insights.ansible_host parameter. When set, the parameter changes the ansible_host configuration in the /etc/insights-client/insights-client.conf file to your selected Ansible hostname. If your system is already connected with Red Hat Insights, this parameter will update the existing Ansible hostname. Jira:RHEL-16974 New mssql_ha_prep_for_pacemaker variable Previously, the microsoft.sql.server RHEL system role did not have a variable to control whether to configure SQL Server for Pacemaker. This update adds the mssql_ha_prep_for_pacemaker . Set the variable to false if you do not want to configure your system for Pacemaker and you want to use another HA solution. Jira:RHEL-19091 The sshd role now configures certificate-based SSH authentications With the sshd RHEL system role, you can now configure and manage multiple SSH servers to authenticate by using SSH certificates. This makes SSH authentications more secure because certificates are signed by a trusted CA and provide fine-grained access control, expiration dates, and centralized management. Jira:RHEL-5972 Use the logging_max_message_size parameter instead of rsyslog_max_message_size in the logging system role Previously, even though the rsyslog_max_message_size parameter was not supported, the logging RHEL system role was using rsyslog_max_message_size instead of using the logging_max_message_size parameter. 
This enhancement ensures that logging_max_message_size is used and not rsyslog_max_message_size to set the maximum size for the log messages. Jira:RHEL-15037 ratelimit_burst variable is only used if ratelimit_interval is set in logging system role Previously, in the logging RHEL system role, when the ratelimit_interval variable was not set, the role would use the ratelimit_burst variable to set the rsyslog ratelimit.burst setting. But it had no effect because it is also required to set ratelimit_interval . With this enhancement, if ratelimit_interval is not set, the role does not set ratelimit.burst . If you want to set ratelimit.burst , you must set both ratelimit_interval and ratelimit_burst variables. Jira:RHEL-19046 selinux role now prints a message when specifying a non-existent module With this release, the selinux RHEL system role prints an error message when you specify a non-existent module in the selinux_modules.path variable. Jira:RHEL-19043 selinux role now supports configuring SELinux in disabled mode With this update, the selinux RHEL system role supports configuring SELinux ports, file contexts, and boolean mappings on nodes that have SELinux set to disabled. This is useful for configuration scenarios before you enable SELinux to permissive or enforcing mode on a system. Jira:RHEL-15870 The metrics RHEL system role now supports configuring PMIE webhooks With this update, you can automatically configure the`global webhook_endpoint` PMIE variable using the metrics_webhook_endpoint variable for the metrics RHEL system role. This enables you to provide a custom URL for your environment that receives messages about important performance events, and is typically used with external tools such as Event-Driven Ansible. Jira:RHEL-13760 The bootloader RHEL system role This update introduces the bootloader RHEL system role. You can use this feature for stable and consistent configuration of bootloaders and kernels on your RHEL systems. For more details regarding requirements, role variables, and example playbooks, see the README resources in the /usr/share/doc/rhel-system-roles/bootloader/ directory. Jira:RHEL-16336 4.16. Virtualization Virtualization is now supported on ARM 64 This update introduces support for creating KVM virtual machines on systems that use ARM 64 (also known as AArch64) CPUs. Note, however, that certain virtualization features and functionalities that are available on AMD64 and Intel 64 systems might work differently or be unsupported on ARM 64. For details, see How virtualization on ARM 64 differs from AMD 64 and Intel 64 . Jira:RHEL-14097 External snapshots for virtual machines This update introduces the external snapshot mechanism for virtual machines (VMs), which replaces the previously deprecated internal snapshot mechanism. As a result, you can create, delete, and revert to VM snapshots that are fully supported. External snapshots work more reliably both in the command-line interface and in the RHEL web console. This also applies to snapshots of running VMs, known as live snapshots. Note, however, that some commands and utilities might still create internal snapshots. To verify that your snapshot is fully supported, ensure that it is configured as external . For example: Jira:RHEL-7528 RHEL now supports Multi-FD migration of virtual machines With this update, multiple file descriptors (multi-FD) migration of virtual machines is now supported. 
Multi-FD migration uses multiple parallel connections to migrate a virtual machine, which can speed up the process by utilizing all the available network bandwidth. It is recommended to use this feature on high-speed networks (20 Gbps and higher). Jira:RHELDOCS-16970 [1] VM migration now supports post-copy preemption Post-copy live migrations of virtual machines (VM) now use the postcopy-preempt feature, which improves the performance and stability of these migrations. Jira:RHEL-13004 [1] , Jira:RHEL-7100 Secure Execution VMs on IBM Z now support cryptographic coprocessors With this update, you can now assign cryptographic coprocessors as mediated devices to a virtual machine (VM) with IBM Secure Execution on IBM Z. By assigning a cryptographic coprocessor as a mediated device to a Secure Execution VM, you can now use hardware encryption without compromising the security of the VM. Jira:RHEL-11597 [1] 4th Generation AMD EPYC processors supported on KVM guests Support for 4th Generation AMD EPYC processors (also known as AMD Genoa) has now been added to the KVM hypervisor and kernel code, and to the libvirt API. This enables KVM virtual machines to use 4th Generation AMD EPYC processors. Jira:RHEL-7568 New virtualization features in the RHEL web console With this update, the RHEL web console includes new features in the Virtual Machines page. You can now: Add an SSH public key during virtual machine (VM) creation. This public key will be stored in the ~/.ssh/authorized_keys file of the designated non-root user on the newly created VM, which provides you with immediate SSH access to the specified user account. Select a pre-formatted block device type when creating a new storage pool. This is a more robust alternative to a physical disk device type, as it prevents unintentional reformatting of a raw disk device. This update also changes some default behavior in the Virtual Machines page: In the Add disk dialog, the Always attach option is now set by default. The Create snapshot action now uses an external snapshot instead of an internal snapshot, which is deprecated in RHEL 9. External snapshots are more reliable and also work for raw images, not just for qcow2 images. You can also select a memory snapshot file location if you want to retain the memory state of the running VM. Jira:RHELDOCS-17000 [1] virtio-mem is now supported on AMD64 and Intel 64 systems With this update, RHEL 9 introduces support for the virtio-mem feature on AMD64 and Intel 64 systems. With virtio-mem , you can dynamically add or remove host memory in virtual machines (VMs). For more information on virtio-mem , see: Adding and removing virtual machine memory by using virtio-mem Jira:RHELDOCS-17053 [1] You can now replace SPICE with VNC in the web console With this update, you can use the web console to replace the SPICE remote display protocol with the VNC protocol in an existing virtual machine (VM). Because the support for the SPICE protocol has been removed in RHEL 9, VMs that use the SPICE protocol fail to start on a RHEL 9 host. For example, RHEL 8 VMs use SPICE by default, so you must switch from SPICE to VNC for a successful migration to RHEL 9. Jira:RHEL-17434 Improved I/O performance for virtio-blk disk devices With this update, you can configure a separate IOThread for each virtqueue in a virtio-blk disk device. This configuration improves performance for virtual machines with multiple CPUs during intensive I/O workloads.
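Because SPICE support is removed in RHEL 9, as noted in the web console entry above, it can be useful to find guests whose definitions still request SPICE graphics before migrating hosts. The following read-only sketch uses the libvirt Python bindings and is hedged: the connection URI is an example and nothing is modified.

# Hedged sketch: report defined guests that still use SPICE graphics and
# would therefore fail to start on a RHEL 9 host, as described above.
# Requires the python3-libvirt bindings; read-only access is sufficient.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # example URI
try:
    for dom in conn.listAllDomains():
        root = ET.fromstring(dom.XMLDesc(0))
        types = {g.get("type") for g in root.findall("./devices/graphics")}
        if "spice" in types:
            print(f"{dom.name()}: still configured for SPICE, consider switching to VNC")
finally:
    conn.close()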
Jira:RHEL-7416 VNC viewer correctly initializes a VM display after live migration of ramfb This update enhances the ramfb framebuffer device, which you can configure as a primary display for a virtual machine (VM). Previously, ramfb was unable to migrate, which resulted in VMs that use ramfb showing a blank screen after live migration. Now, ramfb is compatible with live migration. As a result, you see the VM desktop display when the migration completes. Jira:RHEL-7478 4.17. RHEL in cloud environments RHEL instances on EC2 now support IPv6 IMDS connections With this update, RHEL 8 and 9 instances on Amazon Elastic Compute Cloud (EC2) can use the IPv6 protocol to connect to Instance Metadata Service (IMDS). As a result, you can configure RHEL instances with cloud-init on EC2 with a dual-stack IPv4 and IPv6 connection. In addition, you can launch EC2 instances of RHEL with cloud-init in an IPv6-only subnet. Jira:RHEL-7278 New cloud-init clean option for deleting generated configuration files The cloud-init clean --configs option has been added for the cloud-init utility. You can use this option to delete unnecessary configuration files generated by cloud-init on your instance. For example, to delete cloud-init configuration files that define network setup, use the following command: Jira:RHEL-7311 [1] OpenTelemetry Collector is available for RHEL on AWS While running RHEL on Amazon Web Services (AWS), you can use the OpenTelemetry (OTel) framework to collect and send telemetry data, for example, logs. You can maintain and debug the RHEL cloud instances by using the OTel framework. With this update, RHEL includes the OTel Collector service, which you can use to manage logs. The OTel Collector gathers, processes, transforms, and exports logs to and from various formats and external back ends. You can also use the OTel Collector to aggregate the collected data and generate metrics useful for analytics services. For example, you can configure OTel Collector to send data to Amazon Web Services (AWS) CloudWatch, which enhances the scope and accuracy of data obtained by CloudWatch from RHEL instances. For details, see Configuring the OpenTelemetry Collector for RHEL on public cloud platforms . Jira:RHELDOCS-19755 4.18. Containers Podman now supports containers.conf modules You can use Podman modules to load a predetermined set of configurations. Podman modules are containers.conf files in the TOML format. These modules are located in the following directories, or their subdirectories: For rootless users: $HOME/.config/containers/containers.conf.modules For root users: /etc/containers/containers.conf.modules , or /usr/share/containers/containers.conf.modules You can load the modules on-demand with the podman --module <your_module_name> command to override the system and user configuration files. Working with modules involves the following facts: You can specify modules multiple times by using the --module option. If <your_module_name> is an absolute path, the configuration file will be loaded directly. The relative paths are resolved relative to the three module directories mentioned previously. Modules in $HOME override those in the /etc/ and /usr/share/ directories. For more information, see the upstream documentation . Jira:RHELPLAN-167829 [1] The Container Tools packages have been updated The updated Container Tools RPM meta-package, which contains the Podman, Buildah, Skopeo, crun, and runc tools, is now available.
Notable bug fixes and enhancements over the version include: Notable changes in Podman v4.9: You can now use Podman to load the modules on-demand by using the podman --module <your_module_name> command and to override the system and user configuration files. A new podman farm command with a set of the create , set , remove , and update subcommands has been added. With these commands, you can farm out builds to machines running podman for different architectures. A new podman-compose command has been added, which runs Compose workloads by using an external compose provider such as Docker compose. The podman build command now supports the --layer-label and --cw options. The podman generate systemd command is deprecated. Use Quadlet to run containers and pods under systemd . The podman build command now supports Containerfiles with the HereDoc syntax. The podman kube play command now supports a new --publish-all option. Use this option to expose all containerPorts on the host. For more information about notable changes, see upstream release notes . Jira:RHELPLAN-167796 [1] The Podman v4.9 RESTful API now displays data of progress With this enhancement, the Podman v4.9 RESTful API now displays data of progress when you pull or push an image to the registry. Jira:RHELPLAN-167823 [1] Toolbx is now available With Toolbx, you can install the development and debugging tools, editors, and Software Development Kits (SDKs) into the Toolbx fully mutable container without affecting the base operating system. The Toolbx container is based on the registry.access.redhat.com/ubi9.4/toolbox:latest image. Jira:RHELDOCS-16241 [1] SQLite is now fully supported as a default database backend for Podman With Podman v4.9, the SQLite database backend for Podman, previously available as Technology Preview, is now fully supported. The SQLite database provides better stability, performance, and consistency when working with container metadata. The SQLite database backend is the default backend for new installations of RHEL 9.4. If you upgrade from a RHEL version, the default backend is BoltDB. If you have explicitly configured the database backend by using the database_backend option in the containers.conf file, then Podman will continue to use the specified backend. Jira:RHELPLAN-168180 [1] Administrators can set up isolation for firewall rules by using nftables You can use Netavark, a Podman container networking stack, on systems without iptables installed. Previously, when using the container networking interface (CNI) networking, the predecessor to Netavark, there was no way to set up container networking on systems without iptables installed. With this enhancement, the Netavark network stack works on systems with only nftables installed and improves isolation of automatically generated firewall rules. Jira:RHELDOCS-16955 [1] Containerfile now supports multi-line instructions You can use the multi-line HereDoc instructions (Here Document notation) in the Containerfile file to simplify this file and reduce the number of image layers caused by performing multiple RUN directives. For example, the original Containerfile can contain the following RUN directives: Instead of multiple RUN directives, you can use the HereDoc notation: Jira:RHELPLAN-168185 [1] The gvisor-tap-vsock package is now available The gvisor-tap-vsock package is an alternative to the libslirp user-mode networking library and VPNKit tools and services. It is written in Go and based on the network stack of gVisor. 
Compared to libslirp , the gvisor-tap-vsock library supports a configurable DNS server and dynamic port forwarding. You can use the gvisor-tap-vsock networking library for podman-machine virtual machines. The podman machine command for managing virtual machines is currently unsupported on Red Hat Enterprise Linux. Jira:RHELPLAN-167396 [1]
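The containers.conf modules described above are plain TOML files, so they can be sanity-checked from Python before being passed to podman --module. This is a hedged sketch: the module path and the podman info subcommand are examples, and only the --module behavior documented above is relied on.

# Hedged sketch: parse a rootless containers.conf module and then run
# podman with it via the --module option described above.
import pathlib
import subprocess
import tomllib  # standard library in Python 3.11 and later

module = pathlib.Path.home() / ".config/containers/containers.conf.modules/demo.conf"
with module.open("rb") as fh:
    data = tomllib.load(fh)     # raises tomllib.TOMLDecodeError if malformed
print("sections found:", list(data))

subprocess.run(["podman", "--module", str(module), "info"], check=True)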
[ "sha256hmac -c <hmac_file> -T <target_file>", "nmstatectl gr new.yml > revert.yml", "--- interfaces: - name: bond99 type: bond state: up link-aggregation: mode: active-backup ports-config: - name: eth2 priority: 15", "--- interfaces: - name: eth1.101 type: vlan state: up vlan: base-iface: eth1 id: 101 registration-protocol: mvrp loose-binding: true reorder-headers: true", "dnf install python3.12 dnf install python3.12-pip", "python3.12 python3.12 -m pip --help", "export PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING=true", "[email_addr_parsing] PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING = true", "dnf module install ruby:3.3", "dnf module install php:8.2", "dnf module install nginx:1.24", "dnf module install mariadb:10.11", "dnf module install postgresql:16", "dnf install gcc-toolset-13", "scl enable gcc-toolset-13 tool", "scl enable gcc-toolset-13 bash", "ipa config-mod --addattr ipaconfigstring=EnforceLDAPOTP", "ipa config-mod --delattr ipaconfigstring=EnforceLDAPOTP", "[time_stamp] conn=1 op=0 BIND dn=\"uid=jdoe,ou=people,dc=example,dc=com\" method=128 version=3 [time_stamp] conn=1 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000111632 optime=0.006612223 etime=0.006722325 notes=M details=\"Multi-factor Authentication\" dn=\"uid=jdoe,ou=people,dc=example,dc=com\"", "{ \"date\": \"[time_stamp] \", \"utc_time\": \"1709327649.232748932\", \"event\": \"BIND_SUCCESS\", \"dn\": \"uid=djoe,ou=people,dc=example,dc=com\", \"bind_method\": \"SIMPLE\\/MFA\" , \"root_dn\": false, \"client_ip\": \"::1\", \"server_ip\": \"::1\", \"ldap_version\": 3, \"conn_id\": 1, \"op_id\": 0, \"msg\": \"\" }", "[time_stamp] conn=5 op=-1 fd=64 Disconnect - Protocol error - Unknown Proxy - P4", "virsh snapshot-dumpxml VM-name snapshot-name | grep external <disk name='vda' snapshot='external' type='file'>", "cloud-init clean --configs network", "RUN dnf update RUN dnf -y install golang RUN dnf -y install java", "RUN <<EOF dnf update dnf -y install golang dnf -y install java EOF" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.4_release_notes/new-features
OpenShift sandboxed containers
OpenShift sandboxed containers OpenShift Container Platform 4.16 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/openshift_sandboxed_containers/index
Chapter 4. Using domain-specific LDAP backends with director
Chapter 4. Using domain-specific LDAP backends with director Red Hat OpenStack Platform director can configure keystone to use one or more LDAP backends. This approach results in the creation of a separate LDAP backend for each keystone domain. 4.1. Setting the configuration options For deployments using Red Hat OpenStack Platform director, you need to set the KeystoneLDAPDomainEnable flag to true in your heat templates; as a result, this will configure the domain_specific_drivers_enabled option in keystone (within the identity configuration group). Note The default directory for domain configuration files is set to /etc/keystone/domains/ . You can override this by setting the required path using the keystone::domain_config_directory hiera key and adding it as an ExtraConfig parameter within an environment file. You must also add a specification of the LDAP backend configuration. This is done using the KeystoneLDAPBackendConfigs parameter in tripleo-heat-templates , where you can then specify your required LDAP options. 4.2. Configure the director deployment Create a copy of the keystone_domain_specific_ldap_backend.yaml environment file: Edit the /home/stack/templates/keystone_domain_specific_ldap_backend.yaml environment file and set the values to suit your deployment. For example, these entries create a LDAP configuration for a keystone domain named testdomain : You can also configure the environment file to specify multiple domains. For example: This will result in two domains named domain1 and domain2 ; each will have a different LDAP domain with its own configuration.
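Once a domain such as testdomain is backed by its own LDAP server, clients must name that domain explicitly when they authenticate. The following is a minimal, hedged sketch with the keystoneauth1 library; the auth URL, credentials, and project values are placeholders for your deployment.

# Hedged sketch: obtaining a token as an LDAP-backed user in "testdomain".
# All endpoint and credential values are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url="https://overcloud.example.com:13000/v3",
    username="jdoe",
    password="ExamplePassword",
    user_domain_name="testdomain",      # domain defined in KeystoneLDAPBackendConfigs
    project_name="demo",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)
print(sess.get_token())                  # scoped token issued for the LDAP user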
[ "cp /usr/share/openstack-tripleo-heat-templates/environments/services/keystone_domain_specific_ldap_backend.yaml /home/stack/templates/", "parameter_defaults: KeystoneLDAPDomainEnable: true KeystoneLDAPBackendConfigs: testdomain: url: ldaps://192.0.2.250 user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword suffix: dc=director,dc=example,dc=com user_tree_dn: ou=Users,dc=director,dc=example,dc=com user_filter: \"(memberOf=cn=OSuser,ou=Groups,dc=director,dc=example,dc=com)\" user_objectclass: person user_id_attribute: cn", "KeystoneLDAPBackendConfigs: domain1: url: ldaps://domain1.example.com user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword domain2: url: ldaps://domain2.example.com user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/integrate_with_identity_service/ldap-director
Chapter 5. Storage classes and storage pools
Chapter 5. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools that map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 5.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in the Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD Provisioner , which is the plugin used for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Create new pool Click Create New Pool . Enter the Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 5.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, ensure that you configure access to KMS for one of the following: Using vaulttokens : Ensure that you configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure that you configure access as described in Configuring access to KMS using vaulttenantsa Procedure In the OpenShift Web Console, navigate to Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. 
Select RBD Provisioner openshift-storage.rbd.csi.ceph.com , which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Select existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Create new KMS connection : This is applicable for vaulttokens only. Key Management Service Provider is set to Vault by default. Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with technology partners to provide this documentation as a service to its customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp .
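For reference, the console steps in this section result in a StorageClass similar to the following minimal sketch. The provisioner name and the encrypted/encryptionKMSID parameters are taken from this section and its example; the class name, pool name, and clusterID are placeholders, and the full parameter set (for example, the CSI secret references) may vary with your OpenShift Data Foundation version:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-encrypted-rbd          # placeholder name
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  clusterID: openshift-storage         # placeholder; typically the ODF namespace
  pool: example-pool                   # placeholder pool name
  encrypted: "true"                    # turns on per-volume encryption
  encryptionKMSID: 1-vault             # the ID captured from csi-kms-connection-details, as described above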
[ "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/storage-classes-and-storage-pools_osp
Chapter 8. EgressRouter [network.operator.openshift.io/v1]
Chapter 8. EgressRouter [network.operator.openshift.io/v1] Description EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage: - A service called <name> - An egress pod called <name> - A NAD called <name> Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object. Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired egress router. status object Observed status of EgressRouter. 8.1.1. .spec Description Specification of the desired egress router. Type object Required addresses mode networkInterface Property Type Description addresses array List of IP addresses to configure on the pod's secondary interface. addresses[] object EgressRouterAddress contains a pair of IP CIDR and gateway to be configured on the router's interface. mode string Mode depicts the mode that is used for the egress router. The default mode is "Redirect" and is the only supported mode currently. networkInterface object Specification of interface to create/use. The default is macvlan. Currently only macvlan is supported. redirect object Redirect represents the configuration parameters specific to redirect mode. 8.1.2. .spec.addresses Description List of IP addresses to configure on the pod's secondary interface. Type array 8.1.3. .spec.addresses[] Description EgressRouterAddress contains a pair of IP CIDR and gateway to be configured on the router's interface. Type object Required ip Property Type Description gateway string IP address of the next-hop gateway, if it cannot be automatically determined. Can be IPv4 or IPv6. ip string IP is the address to configure on the router's interface. Can be IPv4 or IPv6. 8.1.4. .spec.networkInterface Description Specification of interface to create/use. The default is macvlan. Currently only macvlan is supported. Type object Property Type Description macvlan object Arguments specific to the interfaceType macvlan 8.1.5. .spec.networkInterface.macvlan Description Arguments specific to the interfaceType macvlan Type object Required mode Property Type Description master string Name of the master interface. Need not be specified if it can be inferred from the IP address. mode string Mode depicts the mode that is used for the macvlan interface; one of Bridge|Private|VEPA|Passthru. 
The default mode is "Bridge". 8.1.6. .spec.redirect Description Redirect represents the configuration parameters specific to redirect mode. Type object Property Type Description fallbackIP string FallbackIP specifies the remote destination's IP address. Can be IPv4 or IPv6. If no redirect rules are specified, all traffic from the router are redirected to this IP. If redirect rules are specified, then any connections on any other port (undefined in the rules) on the router will be redirected to this IP. If redirect rules are specified and no fallback IP is provided, connections on other ports will simply be rejected. redirectRules array List of L4RedirectRules that define the DNAT redirection from the pod to the destination in redirect mode. redirectRules[] object L4RedirectRule defines a DNAT redirection from a given port to a destination IP and port. 8.1.7. .spec.redirect.redirectRules Description List of L4RedirectRules that define the DNAT redirection from the pod to the destination in redirect mode. Type array 8.1.8. .spec.redirect.redirectRules[] Description L4RedirectRule defines a DNAT redirection from a given port to a destination IP and port. Type object Required destinationIP port protocol Property Type Description destinationIP string IP specifies the remote destination's IP address. Can be IPv4 or IPv6. port integer Port is the port number to which clients should send traffic to be redirected. protocol string Protocol can be TCP, SCTP or UDP. targetPort integer TargetPort allows specifying the port number on the remote destination to which the traffic gets redirected to. If unspecified, the value from "Port" is used. 8.1.9. .status Description Observed status of EgressRouter. Type object Required conditions Property Type Description conditions array Observed status of the egress router conditions[] object EgressRouterStatusCondition represents the state of the egress router's managed and monitored components. 8.1.10. .status.conditions Description Observed status of the egress router Type array 8.1.11. .status.conditions[] Description EgressRouterStatusCondition represents the state of the egress router's managed and monitored components. Type object Required status type Property Type Description lastTransitionTime `` LastTransitionTime is the time of the last update to the current status property. message string Message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. reason string Reason is the CamelCase reason for the condition's current status. status string Status of the condition, one of True, False, Unknown. type string Type specifies the aspect reported by this condition; one of Available, Progressing, Degraded 8.2. 
API endpoints The following API endpoints are available: /apis/network.operator.openshift.io/v1/egressrouters GET : list objects of kind EgressRouter /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters DELETE : delete collection of EgressRouter GET : list objects of kind EgressRouter POST : create an EgressRouter /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name} DELETE : delete an EgressRouter GET : read the specified EgressRouter PATCH : partially update the specified EgressRouter PUT : replace the specified EgressRouter /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}/status GET : read status of the specified EgressRouter PATCH : partially update status of the specified EgressRouter PUT : replace status of the specified EgressRouter 8.2.1. /apis/network.operator.openshift.io/v1/egressrouters Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind EgressRouter Table 8.2. HTTP responses HTTP code Reponse body 200 - OK EgressRouterList schema 401 - Unauthorized Empty 8.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters Table 8.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressRouter Table 8.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressRouter Table 8.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.8. HTTP responses HTTP code Reponse body 200 - OK EgressRouterList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressRouter Table 8.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.10. Body parameters Parameter Type Description body EgressRouter schema Table 8.11. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 201 - Created EgressRouter schema 202 - Accepted EgressRouter schema 401 - Unauthorized Empty 8.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name} Table 8.12. Global path parameters Parameter Type Description name string name of the EgressRouter namespace string object name and auth scope, such as for teams and projects Table 8.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EgressRouter Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. 
zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.15. Body parameters Parameter Type Description body DeleteOptions schema Table 8.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressRouter Table 8.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.18. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressRouter Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body Patch schema Table 8.21. 
HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressRouter Table 8.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.23. Body parameters Parameter Type Description body EgressRouter schema Table 8.24. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 201 - Created EgressRouter schema 401 - Unauthorized Empty 8.2.4. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}/status Table 8.25. Global path parameters Parameter Type Description name string name of the EgressRouter namespace string object name and auth scope, such as for teams and projects Table 8.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified EgressRouter Table 8.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.28. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressRouter Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.30. Body parameters Parameter Type Description body Patch schema Table 8.31. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressRouter Table 8.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.33. Body parameters Parameter Type Description body EgressRouter schema Table 8.34. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 201 - Created EgressRouter schema 401 - Unauthorized Empty
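To tie the schema above together, here is a minimal sketch of an EgressRouter custom resource in redirect mode. All field names come from the specification earlier in this chapter; the namespace, interface, addresses, and rule values are placeholders only:

apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
  name: egress-router-example        # placeholder; the CNO creates a service, pod, and NAD with this name
  namespace: example-namespace       # placeholder namespace
spec:
  mode: Redirect                     # currently the only supported mode
  networkInterface:
    macvlan:
      mode: Bridge                   # default macvlan mode
      master: eth0                   # placeholder master interface
  addresses:
    - ip: 192.0.2.10/24              # placeholder CIDR for the secondary interface
      gateway: 192.0.2.1             # placeholder next-hop gateway
  redirect:
    redirectRules:
      - destinationIP: 203.0.113.20  # placeholder external destination
        port: 80
        protocol: TCP
        targetPort: 8080             # optional; defaults to port if omitted
    fallbackIP: 203.0.113.30         # placeholder; traffic on other ports is redirected here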
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/egressrouter-network-operator-openshift-io-v1
Chapter 5. Admin Client configuration properties
Chapter 5. Admin Client configuration properties bootstrap.controllers Type: list Default: "" Importance: high A list of host/port pairs to use for establishing the initial connection to the KRaft controller quorum. This list should be in the form host1:port1,host2:port2,... . bootstrap.servers Type: list Default: "" Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for the client and can be used for two-way authentication for the client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for the client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, the trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. connections.max.idle.ms Type: long Default: 300000 (5 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. default.api.timeout.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter. receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. 
If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 
'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. enable.metrics.push Type: boolean Default: true Importance: low Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.recovery.strategy Type: string Default: none Valid Values: (case insensitive) [REBOOTSTRAP, NONE] Importance: low Controls how the client recovers when none of the brokers known to it is available. If set to none , the client fails. If set to rebootstrap , the client repeats the bootstrap process using bootstrap.servers . Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in-progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. 
If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. retries Type: int Default: 2147483647 Valid Values: [0,... ,2147483647] Importance: low Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or MAX_VALUE and use corresponding timeout parameters to control how long a client should retry a request. retry.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms , then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. 
If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. 
sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. 
Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
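As a quick orientation, the following admin client configuration sketch pulls together several of the properties described above. It is a minimal example rather than a recommendation: the bootstrap address, truststore path, and password are placeholders, security.protocol and the truststore location and password are standard client settings assumed from the full property list, and every value should be tuned to your own environment.
bootstrap.servers=my-cluster-kafka-bootstrap:9093
security.protocol=SSL
ssl.truststore.type=PKCS12
ssl.truststore.location=/opt/kafka/admin-truststore.p12
ssl.truststore.password=<truststore_password>
metadata.max.age.ms=300000
reconnect.backoff.ms=50
reconnect.backoff.max.ms=1000
retry.backoff.ms=100
retry.backoff.max.ms=1000
metrics.recording.level=INFO
enable.metrics.push=true
A file such as this can be passed to the Kafka command-line tools with the --command-config option, or loaded into a java.util.Properties object when constructing an admin client programmatically.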
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/admin-client-configuration-properties-str
13.11. Security Policy
13.11. Security Policy The Security Policy spoke allows you to configure the installed system following restrictions and recommendations ( compliance policies ) defined by the Security Content Automation Protocol (SCAP) standard. This functionality is provided by an add-on which has been enabled by default since Red Hat Enterprise Linux 7.2. When enabled, the packages necessary to provide this functionality will automatically be installed. However, by default, no policies are enforced, meaning that no checks are performed during or after installation unless specifically configured. The Red Hat Enterprise Linux 7 Security Guide provides detailed information about security compliance including background information, practical examples, and additional resources. Important Applying a security policy is not necessary on all systems. This screen should only be used when a specific policy is mandated by your organization rules or government regulations. If you apply a security policy to the system, it will be installed using restrictions and recommendations defined in the selected profile. The openscap-scanner package will also be added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. After the installation finishes, the system will be automatically scanned to verify compliance. The results of this scan will be saved to the /root/openscap_data directory on the installed system. Pre-defined policies which are available in this screen are provided by SCAP Security Guide . See the OpenSCAP Portal for links to detailed information about each available profile. You can also load additional profiles from an HTTP, HTTPS or FTP server. Figure 13.8. Security policy selection screen To configure the use of security policies on the system, first enable configuration by setting the Apply security policy switch to ON . If the switch is in the OFF position, controls in the rest of this screen have no effect. After enabling security policy configuration using the switch, select one of the profiles listed in the top window of the screen, and click the Select profile button below. When a profile is selected, a green check mark will appear on the right side, and the bottom field will display whether any changes will be made before beginning the installation. Note None of the profiles available by default perform any changes before the installation begins. However, loading a custom profile as described below can require some pre-installation actions. To use a custom profile, click the Change content button in the top left corner. This will open another screen where you can enter the URL of valid security content. To go back to the default security content selection screen, click Use SCAP Security Guide in the top left corner. Custom profiles can be loaded from an HTTP , HTTPS or FTP server. Use the full address of the content, including the protocol (such as http:// ). A network connection must be active (enabled in Section 13.13, "Network & Hostname" ) before you can load a custom profile. The content type will be detected automatically by the installer. After you select a profile, or if you want to leave the screen, click Done in the top left corner to return to Section 13.7, "The Installation Summary Screen" .
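For verification after the installation, the same compliance check can be repeated manually with the oscap tool provided by the openscap-scanner package. The command below is only a hedged sketch: the profile ID must match the profile you selected in this screen, and the data-stream path and output file names shown here are illustrative examples that may differ on your system.
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --results /root/openscap_data/eval-results.xml \
    --report /root/openscap_data/eval-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
Comparing the report generated by such a run with the results saved during installation is a simple way to confirm that the system still complies with the selected profile.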
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-security-policy-ppc
6.5. Cloning a Virtual Machine
6.5. Cloning a Virtual Machine You can clone virtual machines without having to create a template or a snapshot first. Important The Clone VM button is disabled while virtual machines are running; you must shut down a virtual machine before you can clone it. Cloning Virtual Machines Click Compute Virtual Machines and select the virtual machine to clone. Click More Actions ( ), then click Clone VM . Enter a Clone Name for the new virtual machine. Click OK .
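If you prefer to script cloning rather than use the Administration Portal, recent 4.x releases expose a clone action on the virtual machine service of the REST API. The following curl call is a sketch built on several assumptions: the Manager address, credentials, virtual machine ID, and clone name are placeholders, and you should confirm the exact action and request body against the REST API Guide for your Manager version. As in the Administration Portal, the virtual machine must be shut down first.
curl -k -u admin@internal:<password> \
    -H "Content-Type: application/xml" \
    -X POST \
    -d '<action><vm><name>cloned-vm</name></vm></action>' \
    https://<manager_fqdn>/ovirt-engine/api/vms/<vm_id>/clone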
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/cloning_a_virtual_machine
4.362. yaboot
4.362. yaboot 4.362.1. RHBA-2011:1767 - yaboot bug fix update An updated yaboot package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The yaboot package provides a boot loader for Open Firmware based PowerPC systems. It can be used to boot IBM eServer System p machines. Bug Fixes BZ# 638654 Previously, yaboot could not check whether an IP address is valid. As a consequence, yaboot netboot failed to operate in an environment where the gateway was not the same as the 'tftp' server, even though the 'tftp' server was on the same subnet. With this update, an IP address validity check has been added. Now, yaboot netboot operates as expected. BZ# 746340 Previously, yaboot discarded parameters passed to anaconda after the Client Architecture Support (CAS) was rebooted. This update upgrades the yaboot binary and modifies the source file. Now, the parameters are passed to the anaconda installer as expected. All users of yaboot are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/yaboot
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 6.6.0-1 Wed 7 Sep 2016 Christian Huffman BZ-1350611: Updated JGroups ENCRYPT details. Revision 6.6.0-0 Mon 25 Jan 2016 Christian Huffman Initial draft for 6.6.0.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/appe-revision_history
4.2.4. Hard Drives
4.2.4. Hard Drives All the technologies discussed so far are volatile in nature. In other words, data contained in volatile storage is lost when the power is turned off. Hard drives, on the other hand, are non-volatile -- the data they contain remains there, even after the power is removed. Because of this, hard drives occupy a special place in the storage spectrum. Their non-volatile nature makes them ideal for storing programs and data for longer-term use. Another unique aspect to hard drives is that, unlike RAM and cache memory, it is not possible to execute programs directly when they are stored on hard drives; instead, they must first be read into RAM. Also different from cache and RAM is the speed of data storage and retrieval; hard drives are at least an order of magnitude slower than the all-electronic technologies used for cache and RAM. The difference in speed is due mainly to their electromechanical nature. There are four distinct phases taking place during each data transfer to or from a hard drive. The following list illustrates these phases, along with the time it would take a typical high-performance drive, on average, to complete each: Access arm movement (5.5 milliseconds) Disk rotation (.1 milliseconds) Heads reading/writing data (.00014 milliseconds) Data transfer to/from the drive's electronics (.003 milliseconds) Of these, only the last phase is not dependent on any mechanical operation. Note Although there is much more to learn about hard drives, disk storage technologies are discussed in more depth in Chapter 5, Managing Storage . For the time being, it is only necessary to keep in mind the huge speed difference between RAM and disk-based technologies and that their storage capacity usually exceeds that of RAM by a factor of at least 10, and often by 100 or more.
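Putting the figures above together gives a feel for why mechanics dominate: a single average transfer takes roughly 5.5 + .1 + .00014 + .003 = 5.603 milliseconds, and the two mechanical phases -- access arm movement and disk rotation -- account for more than 99.9% of that total, while the purely electronic data transfer contributes only a few thousandths of a millisecond.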
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-memory-drives
Chapter 4. User-managed encryption for IBM Cloud
Chapter 4. User-managed encryption for IBM Cloud By default, provider-managed encryption is used to secure the following when you deploy an OpenShift Container Platform cluster: The root (boot) volume of control plane and compute machines Persistent volumes (data volumes) that are provisioned after the cluster is deployed You can override the default behavior by specifying an IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key as part of the installation process. When you bring your own root key, you modify the installation configuration file ( install-config.yaml ) to specify the Cloud Resource Name (CRN) of the root key by using the encryptionKey parameter. You can specify that: The same root key be used for all cluster machines. You do so by specifying the key as part of the cluster's default machine configuration. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. Separate root keys be used for the control plane and compute machine pools. For more information about the encryptionKey parameter, see Additional IBM Cloud configuration parameters . Note Make sure you have integrated Key Protect with your IBM Cloud Block Storage service. For more information, see the Key Protect documentation . 4.1. Next steps Install an OpenShift Container Platform cluster: Installing a cluster on IBM Cloud with customizations Installing a cluster on IBM Cloud with network customizations Installing a cluster on IBM Cloud into an existing VPC Installing a private cluster on IBM Cloud
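To make the encryptionKey parameter more concrete, the following install-config.yaml fragment is a minimal sketch of supplying one root key for all cluster machines through the default machine platform. The field nesting shown here ( defaultMachinePlatform and bootVolume ) and the CRN value are illustrative assumptions; confirm the exact parameter paths against Additional IBM Cloud configuration parameters before using them.
apiVersion: v1
baseDomain: example.com
platform:
  ibmcloud:
    region: us-south
    defaultMachinePlatform:
      bootVolume:
        encryptionKey: "crn:v1:bluemix:public:kms:us-south:a/<account_id>:<key_protect_instance_id>:key:<root_key_id>"
To use separate keys for the control plane and compute machine pools instead, the same setting would be placed under the controlPlane and compute entries rather than under the default machine platform.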
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_cloud/user-managed-encryption-ibm-cloud
1.2. System Permissions
1.2. System Permissions Permissions enable users to perform actions on objects, where objects are either individual objects or container objects. Any permissions that apply to a container object also apply to all members of that container. Figure 1.2. Permissions & Roles Figure 1.3. Red Hat Virtualization Object Hierarchy 1.2.1. User Properties Roles and permissions are the properties of the user. Roles are predefined sets of privileges that permit access to different levels of physical and virtual resources. Multilevel administration provides a finely grained hierarchy of permissions. For example, a data center administrator has permissions to manage all objects in the data center, while a host administrator has system administrator permissions to a single physical host. A user can have permissions to use a single virtual machine but not make any changes to the virtual machine configurations, while another user can be assigned system permissions to a virtual machine. 1.2.2. User and Administrator Roles Red Hat Virtualization provides a range of pre-configured roles, from an administrator with system-wide permissions to an end user with access to a single virtual machine. While you cannot change or remove the default roles, you can clone and customize them, or create new roles according to your requirements. There are two types of roles: Administrator Role: Allows access to the Administration Portal for managing physical and virtual resources. An administrator role confers permissions for actions to be performed in the VM Portal; however, it has no bearing on what a user can see in the VM Portal. User Role: Allows access to the VM Portal for managing and accessing virtual machines and templates. A user role determines what a user can see in the VM Portal. Permissions granted to a user with an administrator role are reflected in the actions available to that user in the VM Portal. 1.2.3. User Roles Explained The table below describes basic user roles which confer permissions to access and configure virtual machines in the VM Portal. Table 1.1. Red Hat Virtualization User Roles - Basic Role Privileges Notes UserRole Can access and use virtual machines and pools. Can log in to the VM Portal, use assigned virtual machines and pools, view virtual machine state and details. PowerUserRole Can create and manage virtual machines and templates. Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center. UserVmManager System administrator of a virtual machine. Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the VM Portal is automatically assigned the UserVmManager role on the machine. The table below describes advanced user roles which allow you to do more fine tuning of permissions for resources in the VM Portal. Table 1.2. Red Hat Virtualization User Roles - Advanced Role Privileges Notes UserTemplateBasedVm Limited privileges to only use Templates. Can use templates to create virtual machines. DiskOperator Virtual disk user. Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. VmCreator Can create virtual machines in the VM Portal. This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. 
Alternatively apply this role for specific data centers or clusters. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains. TemplateCreator Can create, edit, manage and remove virtual machine templates within assigned resources. This role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. DiskCreator Can create, edit, manage and remove virtual disks within assigned clusters or data centers. This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or storage domains. TemplateOwner Can edit and delete the template, assign and manage user permissions for the template. This role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template. VnicProfileUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks. 1.2.4. Administrator Roles Explained The table below describes basic administrator roles which confer permissions to access and configure resources in the Administration Portal. Table 1.3. Red Hat Virtualization System Administrator Roles - Basic Role Privileges Notes SuperUser System Administrator of the Red Hat Virtualization environment. Has full permissions across all objects and levels, can manage all objects across all data centers. ClusterAdmin Cluster Administrator. Possesses administrative permissions for all objects underneath a specific cluster. DataCenterAdmin Data Center Administrator. Possesses administrative permissions for all objects underneath a specific data center except for storage. Important Do not use the administrative user for the directory server as the Red Hat Virtualization administrative user. Create a user in the directory server specifically for use as the Red Hat Virtualization administrative user. The table below describes advanced administrator roles which allow you to do more fine tuning of permissions for resources in the Administration Portal. Table 1.4. Red Hat Virtualization System Administrator Roles - Advanced Role Privileges Notes TemplateAdmin Administrator of a virtual machine template. Can create, delete, and configure the storage domains and network details of templates, and move templates between domains. StorageAdmin Storage Administrator. Can create, delete, configure, and manage an assigned storage domain. HostAdmin Host Administrator. Can attach, remove, configure, and manage a specific host. NetworkAdmin Network Administrator. Can configure and manage the network of a particular data center or cluster. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. VmPoolAdmin System Administrator of a virtual pool. Can create, delete, and configure a virtual pool; assign and remove virtual pool users; and perform basic operations on a virtual machine in the pool. GlusterAdmin Gluster Storage Administrator. Can create, delete, configure, and manage Gluster storage volumes. VmImporterExporter Import and export Administrator of a virtual machine. Can import and export virtual machines. 
Able to view all virtual machines and templates exported by other users. 1.2.5. Assigning an Administrator or User Role to a Resource Assign administrator or user roles to resources to allow users to access or manage that resource. Assigning a Role to a Resource Find and click the resource's name to open the details view. Click the Permissions tab to list the assigned users, the user's role, and the inherited permissions for the selected resource. Click Add . Enter the name or user name of an existing user into the Search text box and click Go . Select a user from the resulting list of possible matches. Select a role from the Role to Assign drop-down list. Click OK . The user now has the inherited permissions of that role enabled for that resource. 1.2.6. Removing an Administrator or User Role from a Resource Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource. Removing a Role from a Resource Find and click the resource's name to open the details view. Click the Permissions tab to list the assigned users, the user's role, and the inherited permissions for the selected resource. Select the user to remove from the resource. Click Remove . Click OK . 1.2.7. Managing System Permissions for a Data Center As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment. The data center administrator role permits the following actions: Create and remove clusters associated with the data center. Add and remove hosts, virtual machines, and pools associated with the data center. Edit user permissions for virtual machines associated with the data center. Note You can only assign roles and permissions to existing users. You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator. 1.2.8. Data Center Administrator Roles Explained Data Center Permission Roles The table below describes the administrator roles and privileges applicable to data center administration. Table 1.5. Red Hat Virtualization System Administrator Roles Role Privileges Notes DataCenterAdmin Data Center Administrator Can use, create, delete, manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates and virtual machines. NetworkAdmin Network Administrator Can configure and manage the network of a particular data center. 
A network administrator of a data center inherits network permissions for virtual machines within the data center as well. 1.2.9. Managing System Permissions for a Cluster As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in the cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment. The cluster administrator role permits the following actions: Create and remove associated clusters. Add and remove hosts, virtual machines, and pools associated with the cluster. Edit user permissions for virtual machines associated with the cluster. Note You can only assign roles and permissions to existing users. You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator. 1.2.10. Cluster Administrator Roles Explained Cluster Permission Roles The table below describes the administrator roles and privileges applicable to cluster administration. Table 1.6. Red Hat Virtualization System Administrator Roles Role Privileges Notes ClusterAdmin Cluster Administrator Can use, create, delete, manage all physical and virtual resources in a specific cluster, including hosts, templates and virtual machines. Can configure network properties within the cluster such as designating display networks, or marking a network as required or non-required. However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster, to do so NetworkAdmin permissions are required. NetworkAdmin Network Administrator Can configure and manage the network of a particular cluster. A network administrator of a cluster inherits network permissions for virtual machines within the cluster as well. 1.2.11. Managing System Permissions for a Network As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration roles, such as viewing and attaching networks on a specific virtual machine or template. 
You can use the Configure button in the header bar to assign a network administrator for all networks in the environment. The network administrator role permits the following actions: Create, edit and remove networks. Edit the configuration of the network, including configuring port mirroring. Attach and detach networks from resources including clusters and virtual machines. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator. 1.2.12. Network Administrator and User Roles Explained Network Permission Roles The table below describes the administrator and user roles and privileges applicable to network administration. Table 1.7. Red Hat Virtualization Network Administrator and User Roles Role Privileges Notes NetworkAdmin Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine. VnicProfileUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks. 1.2.13. Managing System Permissions for a Host As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment. The host administrator role permits the following actions: Edit the configuration of the host. Set up the logical networks. Remove the host. You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator. 1.2.14. Host Administrator Roles Explained Host Permission Roles The table below describes the administrator roles and privileges applicable to host administration. Table 1.8. Red Hat Virtualization System Administrator Roles Role Privileges Notes HostAdmin Host Administrator Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host. 1.2.15. Managing System Permissions for a Storage Domain As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. 
For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment. The storage domain administrator role permits the following actions: Edit the configuration of the storage domain. Move the storage domain into maintenance mode. Remove the storage domain. Note You can only assign roles and permissions to existing users. You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator. 1.2.16. Storage Administrator Roles Explained Storage Domain Permission Roles The table below describes the administrator roles and privileges applicable to storage domain administration. Table 1.9. Red Hat Virtualization System Administrator Roles Role Privileges Notes StorageAdmin Storage Administrator Can create, delete, configure and manage a specific storage domain. GlusterAdmin Gluster Storage Administrator Can create, delete, configure and manage Gluster storage volumes. 1.2.17. Managing System Permissions for a Virtual Machine Pool As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources. The virtual machine pool administrator role permits the following actions: Create, edit, and remove pools. Add and detach virtual machines from the pool. Note You can only assign roles and permissions to existing users. 1.2.18. Virtual Machine Pool Administrator Roles Explained Pool Permission Roles The table below describes the administrator roles and privileges applicable to pool administration. Table 1.10. Red Hat Virtualization System Administrator Roles Role Privileges Notes VmPoolAdmin System Administrator role of a virtual pool. Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine. ClusterAdmin Cluster Administrator Can use, create, delete, manage all virtual machine pools in a specific cluster. 1.2.19. Managing System Permissions for a Virtual Disk As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. 
For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. Red Hat Virtualization Manager provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the VM Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources. The virtual disk creator role permits the following actions: Create, edit, and remove virtual disks associated with a virtual machine or other resources. Edit user permissions for virtual disks. Note You can only assign roles and permissions to existing users. 1.2.20. Virtual Disk User Roles Explained Virtual Disk User Permission Roles The table below describes the user roles and privileges applicable to using and administrating virtual disks in the VM Portal. Table 1.11. Red Hat Virtualization System Administrator Roles Role Privileges Notes DiskOperator Virtual disk user. Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. DiskCreator Can create, edit, manage and remove virtual disks within assigned clusters or data centers. This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. 1.2.21. Setting a Legacy SPICE Cipher SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is: kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine. You can change the cipher string by using an Ansible playbook. Changing the cipher string On the Manager machine, create a file in the directory /usr/share/ovirt-engine/playbooks . For example: Enter the following in the file and save it: name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption Run the file you just created: Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy using the --extra-vars option with the variable host_deploy_spice_cipher_string , as follows:
[ "vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml", "name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption", "ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml", "ansible-playbook -l hostname --extra-vars host_deploy_spice_cipher_string=\"DEFAULT:-RC4:-3DES:-DES\" /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-system_permissions
Hardware accelerators
Hardware accelerators OpenShift Container Platform 4.17 Hardware accelerators Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/hardware_accelerators/index
4.145. libsndfile
4.145. libsndfile 4.145.1. RHBA-2011:1226 - libsndfile bug fix update An updated libsndfile package that fixes one bug is now available for Red Hat Enterprise Linux 6. The libsndfile package provides a library for reading and writing sound files. Bug Fix BZ# 664323 Prior to this update, the libsndfile package was built without the Ogg container format support. As a result, applications using the libsndfile library were not able to work with the Ogg format. With this update, the problem has been fixed so that applications can now work with the Ogg format as expected. All users of libsndfile are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libsndfile
Connectivity Link observability guide
Connectivity Link observability guide Red Hat Connectivity Link 1.0 Observe and monitor Gateways, APIs, and applications on OpenShift Red Hat Connectivity Link documentation team
null
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/connectivity_link_observability_guide/index
Chapter 4. Application backup and restore
Chapter 4. Application backup and restore 4.1. OADP features and plug-ins OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications. The default plug-ins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources. 4.1.1. OADP features OpenShift API for Data Protection (OADP) supports the following features: Backup You can back up all resources in your cluster or you can filter the resources by type, namespace, or label. OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic. Restore You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the restored objects by namespace, PV, or label. Schedule You can schedule backups at specified intervals. Hooks You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container. 4.1.2. OADP plug-ins The OpenShift API for Data Protection (OADP) provides default Velero plug-ins that are integrated with storage providers to support backup and snapshot operations. You can create custom plug-ins based on the Velero plug-ins. OADP also provides plug-ins for OpenShift Container Platform resource backups and Container Storage Interface (CSI) snapshots. Table 4.1. OADP plug-ins OADP plug-in Function Storage location aws Backs up and restores Kubernetes objects by using object store. AWS S3 Backs up and restores volumes by using snapshots. AWS EBS azure Backs up and restores Kubernetes objects by using object store. Microsoft Azure Blob storage Backs up and restores volumes by using snapshots. Microsoft Azure Managed Disks gcp Backs up and restores Kubernetes objects by using object store. Google Cloud Storage Backs up and restores volumes by using snapshots. Google Compute Engine Disks openshift Backs up and restores OpenShift Container Platform resources by using object store. [1] Object store csi Backs up and restores volumes by using CSI snapshots. [2] Cloud storage that supports CSI snapshots Mandatory. The csi plug-in uses the Velero CSI beta snapshot API . 4.1.3. About OADP Velero plug-ins You can configure two types of plug-ins when you install Velero: Default cloud provider plug-ins Custom plug-ins Both types of plug-in are optional, but most users configure at least one cloud provider plug-in. 4.1.3.1. Default Velero cloud provider plug-ins You can install any of the following default Velero cloud provider plug-ins when you configure the oadp_v1alpha1_dpa.yaml file during deployment: aws (Amazon Web Services) gcp (Google Cloud Platform) azure (Microsoft Azure) openshift (OpenShift Velero plug-in) csi (Container Storage Interface) kubevirt (KubeVirt) You specify the desired default plug-ins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the openshift , aws , azure , and gcp plug-ins: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp 4.1.3.2. 
Custom Velero plug-ins You can install a custom Velero plug-in by specifying the plug-in image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment. You specify the desired custom plug-ins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the default openshift , azure , and gcp plug-ins and a custom plug-in that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin : apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin 4.2. Installing and configuring OADP 4.2.1. About installing OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.7 . To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types: Amazon Web Services Microsoft Azure Google Cloud Platform Multicloud Object Gateway S3-compatible object storage, such as Noobaa or Minio Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . You can back up persistent volumes (PVs) by using snapshots or Restic. To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud provider, such as OpenShift Container Storage If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with Restic . You create a Secret object for your storage provider credentials and then you install the Data Protection Application. Additional resources Overview of backup locations and snapshot locations in the Velero documentation . 4.2.2. Installing and configuring the OpenShift API for Data Protection with Amazon Web Services You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator, configuring AWS for Velero, and then installing the Data Protection Application. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.2.2.1. Installing the OADP Operator You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.7 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.2.2.2. Configuring Amazon Web Services You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the AWS CLI installed. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application. 4.2.2.3. Creating a secret for backup and snapshot locations You create a Secret object for the backup and snapshot locations if they use the same credentials. The default name of the Secret is cloud-credentials . Prerequisites Your object storage and cloud storage must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. 
Note The DataProtectionApplication custom resource (CR) requires a Secret for installation. If no spec.backupLocations.credential.name value is specified, the default name is used. If you do not want to specify the backup locations or the snapshot locations, you must create a Secret with the default name by using an empty credentials-velero file. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.2.2.3.1. Configuring secrets for different backup and snapshot location credentials If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file. Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR). Procedure Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example: [backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create a Secret object with the credentials-velero file: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1 Add the profiles to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: "backupStorage" credential: key: cloud name: cloud-credentials snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: "volumeSnapshot" 4.2.2.4. Configuring the Data Protection Application You can configure Velero resource allocations and enable self-signed CA certificates. 4.2.2.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: resourceAllocations: limits: cpu: "1" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4 1 1 Specify the value in millicpus or CPU units. Default value is 500m or 1 CPU unit. 2 Default value is 512Mi . 3 Default value is 500m or 1 CPU unit. 4 Default value is 256Mi . 4.2.2.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 Must be false. Setting this parameter to true disables SSL/TLS certificate verification, so the CA certificate is ignored. 4.2.2.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials , which contains separate profiles for the backup and snapshot location credentials. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift <.> - aws restic: enable: true <.> backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> config: region: <region> profile: "default" credential: key: cloud name: cloud-credentials <.> snapshotLocations: <.> - name: default velero: provider: aws config: region: <region> <.> profile: "default" <.> The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster. <.> Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR. <.> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. <.> Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. <.> Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. <.> You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. <.> The snapshot location must be in the same region as the PVs. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.2.2.5.1.
Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2 1 Add the csi default plug-in. 2 Add the EnableCSI feature flag. 4.2.3. Installing and configuring the OpenShift API for Data Protection with Microsoft Azure You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator, configuring Azure for Velero, and then installing the Data Protection Application. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.2.3.1. Installing the OADP Operator You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.7 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.2.3.2. Configuring Microsoft Azure You configure Microsoft Azure for the OpenShift API for Data Protection (OADP). Prerequisites You must have the Azure CLI installed. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location.
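If you are not sure which value to pass to --location in the preceding step, or want to confirm that the resource group was created, the following Azure CLI commands can help; this is an optional check and not part of the documented procedure:
USD az account list-locations -o table # lists the region names accepted by --location
USD az group show -n USDAZURE_RESOURCE_GROUP # confirms the resource group exists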
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Obtain the storage account access key: USD AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list \ --account-name USDAZURE_STORAGE_ACCOUNT_ID \ --query "[?keyName == 'key1'].value" -o tsv` Create a credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_STORAGE_ACCOUNT_ACCESS_KEY=USD{AZURE_STORAGE_ACCOUNT_ACCESS_KEY} 1 AZURE_CLOUD_NAME=AzurePublicCloud EOF 1 Mandatory. You cannot back up internal images if the credentials-velero file contains only the service principal credentials. You use the credentials-velero file to create a Secret object for Azure before you install the Data Protection Application. 4.2.3.3. Creating a secret for backup and snapshot locations You create a Secret object for the backup and snapshot locations if they use the same credentials. The default name of the Secret is cloud-credentials-azure . Prerequisites Your object storage and cloud storage must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Note The DataProtectionApplication custom resource (CR) requires a Secret for installation. If no spec.backupLocations.credential.name value is specified, the default name is used. If you do not want to specify the backup locations or the snapshot locations, you must create a Secret with the default name by using an empty credentials-velero file. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.2.3.3.1. Configuring secrets for different backup and snapshot location credentials If your backup and snapshot locations use different credentials, you create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-azure . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. 
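The credentials-velero file created above references AZURE_SUBSCRIPTION_ID , AZURE_TENANT_ID , AZURE_CLIENT_ID , and AZURE_CLIENT_SECRET , which the procedure assumes are already set in your shell. One possible way to populate them is to read the subscription and tenant IDs from the CLI and create a service principal for Velero; this is a hedged sketch rather than the documented procedure, and the service principal name and role shown here are only examples:
USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
USD AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`
USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv`
USD AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`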
Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure 1 Backup location Secret with custom name. 4.2.3.4. Configuring the Data Protection Application You can configure Velero resource allocations and enable self-signed CA certificates. 4.2.3.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: resourceAllocations: limits: cpu: "1" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4 1 Specify the value in millicpus or CPU units. Default value is 500m or 1 CPU unit. 2 Default value is 512Mi . 3 Default value is 500m or 1 CPU unit. 4 Default value is 256Mi . 4.2.3.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 Must be false. Setting this parameter to true disables SSL/TLS certificate verification, so the CA certificate is ignored. 4.2.3.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure .
If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials-azure , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - azure - openshift <.> restic: enable: true <.> backupLocations: - velero: config: resourceGroup: <azure_resource_group> <.> storageAccount: <azure_storage_account_id> <.> subscriptionId: <azure_subscription_id> <.> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure <.> provider: azure default: true objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> snapshotLocations: <.> - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure <.> The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster. <.> Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR. <.> Specify the Azure resource group. <.> Specify the Azure storage account ID. <.> Specify the Azure subscription ID. <.> If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. <.> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. <.> Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. <.> You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.2.3.5.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2 1 Add the csi default plug-in. 2 Add the EnableCSI feature flag. 4.2.4. 
Installing and configuring the OpenShift API for Data Protection with Google Cloud Platform You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator, configuring GCP for Velero, and then installing the Data Protection Application. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.2.4.1. Installing the OADP Operator You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.7 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.2.4.2. Configuring Google Cloud Platform You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
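Google Cloud Storage bucket names are globally unique, so the name you choose for BUCKET might already be taken. As an optional check before the next step (not part of the documented procedure), you can test whether the name is already in use:
USD gsutil ls -b gs://USDBUCKET/ # an error such as BucketNotFoundException usually means the name is still available
An AccessDeniedException typically means the bucket exists but is owned by another project, so choose a different name in that case.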
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application. 4.2.4.3. Creating a secret for backup and snapshot locations You create a Secret object for the backup and snapshot locations if they use the same credentials. The default name of the Secret is cloud-credentials-gcp . Prerequisites Your object storage and cloud storage must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.2.4.3.1. Configuring secrets for different backup and snapshot location credentials If your backup and snapshot locations use different credentials, you create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-gcp . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. 
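As an optional check that is not part of the documented procedure, you can confirm that the service account key saved to credentials-velero earlier in this section is usable by activating it; note that this switches the active gcloud account, so you may want to switch back afterwards:
USD gcloud auth activate-service-account --key-file credentials-velero
USD gcloud auth list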
Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 1 Backup location Secret with custom name. 4.2.4.4. Configuring the Data Protection Application You can configure Velero resource allocations and enable self-signed CA certificates. 4.2.4.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: resourceAllocations: limits: cpu: "1" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4 1 Specify the value in millicpus or CPU units. Default value is 500m or 1 CPU unit. 2 Default value is 512Mi . 3 Default value is 500m or 1 CPU unit. 4 Default value is 256Mi . 4.2.4.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 Must be false. Setting this parameter to true disables SSL/TLS certificate verification, so the CA certificate is ignored. 4.2.4.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials-gcp , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR.
Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp - openshift <.> restic: enable: true <.> backupLocations: - velero: provider: gcp default: true credential: key: cloud name: cloud-credentials-gcp <.> objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> snapshotLocations: <.> - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 <.> <.> The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster. <.> Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR. <.> If you do not specify this value, the default name, cloud-credentials-gcp , is used. If you specify a custom name, the custom name is used for the backup location. <.> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. <.> Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. <.> You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. <.> The snapshot location must be in the same region as the PVs. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.2.4.5.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2 1 Add the csi default plug-in. 2 Add the EnableCSI feature flag. 4.2.5. Installing and configuring the OpenShift API for Data Protection with Multicloud Object Gateway You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator, creating a Secret object, and then installing the Data Protection Application. MCG is a component of OpenShift Container Storage (OCS). You configure MCG as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . If your cloud provider has a native snapshot API, configure a snapshot location. If your cloud provider does not support snapshots or if your storage is NFS, you can create backups with Restic. You do not need to specify a snapshot location in the DataProtectionApplication CR for Restic or Container Storage Interface (CSI) snapshots. To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.2.5.1. Installing the OADP Operator You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.7 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.2.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Container Storage. Prerequisites You must deploy OpenShift Container Storage by using the appropriate OpenShift Container Storage deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object when you install the Data Protection Application. 4.2.5.3. Creating a secret for backup and snapshot locations You create a Secret object for the backup and snapshot locations if they use the same credentials. The default name of the Secret is cloud-credentials . Prerequisites Your object storage and cloud storage must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.2.5.3.1. Configuring secrets for different backup and snapshot location credentials If your backup and snapshot locations use different credentials, you create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. 
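A hedged sketch of the NooBaa describe step mentioned above, assuming Multicloud Object Gateway is deployed in the default openshift-storage namespace and that the generated Secret is named noobaa-admin (verify both names in your cluster):
USD oc describe noobaa -n openshift-storage # the S3 endpoint is reported in the status
USD oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
USD oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode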
Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift restic: enable: true backupLocations: - velero: config: profile: "default" region: minio s3Url: <url> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.2.5.4. Configuring the Data Protection Application You can configure Velero resource allocations and enable self-signed CA certificates. 4.2.5.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: resourceAllocations: limits: cpu: "1" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4 1 Specify the value in millicpus or CPU units. Default value is 500m or 1 CPU unit. 2 Default value is 512Mi . 3 Default value is 500m or 1 CPU unit. 4 Default value is 256Mi . 4.2.5.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 Must be false. Setting this parameter to true disables SSL/TLS certificate verification, so the CA certificate is ignored. 4.2.5.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials .
If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift <.> restic: enable: true <.> backupLocations: - velero: config: profile: "default" region: minio s3Url: <url> <.> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials <.> objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> <.> The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster. <.> Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR. <.> Specify the URL of the S3 endpoint. <.> If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. <.> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. <.> Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.2.5.5.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2 1 Add the csi default plug-in. 2 Add the EnableCSI feature flag. 4.2.6. Installing and configuring the OpenShift API for Data Protection with OpenShift Container Storage You install the OpenShift API for Data Protection (OADP) with OpenShift Container Storage (OCS) by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application. You can configure Multicloud Object Gateway or any S3-compatible object storage as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API for S3 storage is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . If the cloud provider has a native snapshot API, you can configure cloud storage as a snapshot location in the DataProtectionApplication CR. You do not need to specify a snapshot location for Restic or Container Storage Interface (CSI) snapshots. To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.2.6.1. Installing the OADP Operator You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.7 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. Note After you install the OADP Operator, you configure object storage as a backup location and cloud storage as a snapshot location, if the cloud provider supports a native snapshot API. If the cloud provider does not support snapshots or if your storage is NFS, you can create backups with Restic . Restic does not require a snapshot location. 4.2.6.2. Creating a secret for backup and snapshot locations You create a Secret object for the backup and snapshot locations if they use the same credentials. The default name of the Secret is cloud-credentials , unless you specify a default plug-in for the backup storage provider. Prerequisites Your object storage and cloud storage must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.2.6.2.1. Configuring secrets for different backup and snapshot location credentials If your backup and snapshot locations use different credentials, you create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. 
Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - csi - openshift featureFlags: - EnableCSI restic: enable: true backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.2.6.3. Configuring the Data Protection Application You can configure Velero resource allocations and enable self-signed CA certificates. 4.2.6.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: resourceAllocations: limits: cpu: "1" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4 1 Specify the value in millicpus or CPU units. Default value is 500m or 1 CPU unit. 2 Default value is 512Mi . 3 Default value is 500m or 1 CPU unit. 4 Default value is 256Mi . 4.2.6.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 Must be false. Setting this parameter to true disables SSL/TLS certificate verification, so the CA certificate is ignored. 4.2.6.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials .
If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp <.> - csi <.> - openshift <.> restic: enable: true <.> backupLocations: - velero: provider: gcp <.> default: true credential: key: cloud name: <default_secret> <.> objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> <.> Specify the default plug-in for the backup provider, for example, gcp , if appropriate. <.> Specify the csi default plug-in if you use CSI snapshots to back up PVs. The csi plug-in uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. <.> The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster. <.> Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR. <.> Specify the backup provider. <.> If you use a default plug-in for the backup provider, you must specify the correct default name for the Secret , for example, cloud-credentials-gcp . If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. <.> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. <.> Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.2.6.4.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2 1 Add the csi default plug-in. 2 Add the EnableCSI feature flag. 4.2.7. Uninstalling the OpenShift API for Data Protection You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details. 4.3. Backing up and restoring 4.3.1. Backing up applications You back up applications by creating a Backup custom resource (CR). 
The Backup CR creates backup files for Kubernetes resources and internal images, on S3 object storage, and snapshots for persistent volumes (PVs), if the cloud provider uses a native snapshot API or the Container Storage Interface (CSI) to create snapshots, such as OpenShift Container Storage 4. For more information, see CSI volume snapshots . Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . If your cloud provider has a native snapshot API or supports Container Storage Interface (CSI) snapshots , the Backup CR backs up persistent volumes by creating snapshots. For more information, see the Overview of CSI volume snapshots in the OpenShift Container Platform documentation. If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Restic . You can create backup hooks to run commands before or after the backup operation. You can schedule backups by creating a Schedule CR instead of a Backup CR. 4.3.1.1. Creating a Backup CR You back up Kubernetes resources, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR). Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Backup location prerequisites: You must have S3 object storage configured for Velero. You must have a backup location configured in the DataProtectionApplication CR. Snapshot location prerequisites: Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots. For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver. You must have a volume location configured in the DataProtectionApplication CR. Procedure Retrieve the backupStorageLocations CRs: USD oc get backupStorageLocations Example output
NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
velero-sample-1   Available   11s              31m
Create a Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 storageLocation: <velero-sample-1> 2 ttl: 720h0m0s 1 Specify an array of namespaces to back up. 2 Specify the name of the backupStorageLocations CR. Verify that the status of the Backup CR is Completed : USD oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}' 4.3.1.2. Backing up persistent volumes with CSI snapshots You back up persistent volumes with Container Storage Interface (CSI) snapshots by creating a VolumeSnapshotClass custom resource (CR) to register the CSI driver before you create the Backup CR. Prerequisites The cloud provider must support CSI snapshots. You must enable CSI in the DataProtectionApplication CR.
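If you prefer to block until the backup finishes instead of checking the phase manually with the command above, a minimal shell sketch (not part of the documented procedure) is shown below; it assumes the backup eventually reaches Completed , so a backup that ends in Failed or PartiallyFailed requires interrupting the loop manually:
USD until [ "USD(oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}')" = "Completed" ]; do sleep 10; done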
Procedure Create a VolumeSnapshotClass CR, as in the following examples: Ceph RBD apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass deletionPolicy: Retain metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" snapshotter: openshift-storage.rbd.csi.ceph.com driver: openshift-storage.rbd.csi.ceph.com parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage Ceph FS apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" driver: openshift-storage.cephfs.csi.ceph.com deletionPolicy: Retain parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage Other cloud providers apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" driver: <csi_driver> deletionPolicy: Retain You can now create a Backup CR. 4.3.1.3. Backing up applications with Restic You back up Kubernetes resources, internal images, and persistent volumes with Restic by editing the Backup custom resource (CR). You do not need to specify a snapshot location in the DataProtectionApplication CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. You must not disable the default Restic installation by setting spec.configuration.restic.enable to false in the DataProtectionApplication CR. The DataProtectionApplication CR must be in a Ready state. Procedure Edit the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToRestic: true 1 ... 1 Add defaultVolumesToRestic: true to the spec block. 4.3.1.4. Creating backup hooks You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR). Pre hooks run before the pod is backed up. Post hooks run after the backup. Procedure Add a hook to the spec.hooks block of the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server pre: 4 - exec: container: <container> 5 command: - /bin/uname 6 - -a onError: Fail 7 timeout: 30s 8 post: 9 ... 1 Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource. 3 Optional: This hook only applies to objects matching the label selector. 4 Array of hooks to run before the backup. 5 Optional: If the container is not specified, the command runs in the first container in the pod. 6 Array of commands that the hook runs. 7 Allowed values for error handling are Fail and Continue . The default is Fail . 8 Optional: How long to wait for the commands to run. The default is 30s . 9 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. 4.3.1.5. 
Scheduling backups You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR. Warning Leave enough time in your backup schedule for a backup to finish before another backup is created. For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Procedure Retrieve the backupStorageLocations CRs: USD oc get backupStorageLocations Example output
NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
velero-sample-1   Available   11s              31m
Create a Schedule CR, as in the following example: USD cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToRestic: true 4 ttl: 720h0m0s EOF 1 cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00. 2 Array of namespaces to back up. 3 Name of the backupStorageLocations CR. 4 Optional: Add the defaultVolumesToRestic: true key-value pair if you are backing up volumes with Restic. Verify that the status of the Schedule CR is Completed after the scheduled backup runs: USD oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}' 4.3.2. Restoring applications You restore application backups by creating a Restore custom resource (CR). You can create restore hooks to run commands in init containers, before the application container starts, or in the application container itself. 4.3.2.1. Creating a Restore CR You restore a Backup custom resource (CR) by creating a Restore CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. You must have a Velero Backup CR. Adjust the requested size so the persistent volume (PV) capacity matches the requested size at backup time. Procedure Create a Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 1 Name of the Backup CR. Verify that the status of the Restore CR is Completed : USD oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored: USD oc get all -n <namespace> 1 1 Namespace that you backed up. 4.3.2.2. Creating restore hooks You create restore hooks to run commands in a container in a pod while restoring your application by editing the Restore custom resource (CR). You can create two types of restore hooks: An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. An exec hook runs commands or scripts in a container of a restored pod.
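After the Schedule CR described above starts producing backups, you can list the Backup CRs that it generated by filtering on the schedule label that Velero adds to them; a hedged sketch assuming the velero.io/schedule-name label:
USD oc get backups -n openshift-adp -l velero.io/schedule-name=<schedule>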
Procedure Add a hook to the spec.hooks block of the Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c - exec: container: <container> 4 command: - /bin/bash 5 - -c - "psql < /backup/backup.sql" waitTimeout: 5m 6 execTimeout: 1m 7 onError: Continue 8 1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource. 3 Optional: This hook only applies to objects matching the label selector. 4 Optional: If the container is not specified, the command runs in the first container in the pod. 5 Array of commands that the hook runs. 6 Optional: If the waitTimeout is not specified, the restore waits indefinitely. You can specify how long to wait for a container to start and for preceding hooks in the container to complete. The wait timeout starts when the container is restored and might require time for the container to pull the image and mount the volumes. 7 Optional: How long to wait for the commands to run. The default is 30s . 8 Allowed values for error handling are Fail and Continue : Continue : Only command failures are logged. Fail : No more restore hooks run in any container in any pod. The status of the Restore CR will be PartiallyFailed . 4.4. Troubleshooting You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool . The Velero CLI tool provides more detailed logs and information. You can check installation issues , backup and restore CR issues , and Restic issues . You can collect logs, CR information, and Prometheus metric data by using the must-gather tool . You can obtain the Velero CLI tool by: Downloading the Velero CLI tool Accessing the Velero binary in the Velero deployment in the cluster 4.4.1. Downloading the Velero CLI tool You can download and install the Velero CLI tool by following the instructions on the Velero documentation page . The page includes instructions for: macOS by using Homebrew GitHub Windows by using Chocolatey Prerequisites You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. You have installed kubectl locally. Procedure Open a browser and navigate to "Install the CLI" on the Velero website . Follow the appropriate procedure for macOS, GitHub, or Windows. Download the Velero version appropriate for your version of OADP, according to the table that follows: Table 4.2. OADP-Velero version relationship OADP version Velero version 0.2.6 1.6.0 0.5.5 1.7.1 1.0.0 1.7.1 1.0.1 1.7.1 1.0.2 1.7.1 1.0.3 1.7.1 4.4.2. Accessing the Velero binary in the Velero deployment in the cluster You can use a shell command to access the Velero binary in the Velero deployment in the cluster. Prerequisites Your DataProtectionApplication custom resource has a status of Reconcile complete . Procedure Enter the following command to set the needed alias: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' 4.4.3.
Debugging Velero resources with the OpenShift CLI tool You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool. Velero CRs Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc describe <velero_cr> <cr_name> Velero pod logs Use the oc logs command to retrieve the Velero pod logs: USD oc logs pod/<velero> Velero pod debug logs You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example. Note This option is available starting from OADP 1.0.3. apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning The following logLevel values are available: trace debug info warning error fatal panic It is recommended to use debug for most logs. 4.4.4. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 4.4.5. Installation issues You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application. 4.4.5.1. Backup storage contains invalid directories The Velero pod log displays the error message, Backup storage contains invalid top-level directories . Cause The object storage contains top-level directories that are not Velero directories. Solution If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest. 4.4.5.2. Incorrect AWS credentials The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain . Cause The credentials-velero file used to create the Secret object is incorrectly formatted. Solution Ensure that the credentials-velero file is correctly formatted, as in the following example: Example credentials-velero file 1 AWS default profile. 
2 Do not enclose the values with quotation marks ( " , ' ). 4.4.6. Backup and Restore CR issues You might encounter these common issues with Backup and Restore custom resources (CRs). 4.4.6.1. Backup CR cannot retrieve volume The Backup CR displays the error message, InvalidVolume.NotFound: The volume 'vol-xxxx' does not exist . Cause The persistent volume (PV) and the snapshot locations are in different regions. Solution Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV. Create a new Backup CR. 4.4.6.2. Backup CR status remains in progress The status of a Backup CR remains in the InProgress phase and does not complete. Cause If a backup is interrupted, it cannot be resumed. Solution Retrieve the details of the Backup CR: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ backup describe <backup> Delete the Backup CR: USD oc delete backup <backup> -n openshift-adp You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage. Create a new Backup CR. 4.4.7. Restic issues You might encounter these issues when you back up applications with Restic. 4.4.7.1. Restic permission error for NFS data volumes with root_squash enabled The Restic pod log displays the error message, controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied" . Cause If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups. Solution You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest: Create a supplemental group for Restic on the NFS data volume. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the spec.configuration.restic.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as in the following example: spec: configuration: restic: enable: true supplementalGroups: - <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 4.4.7.2. Restore CR of Restic backup is "PartiallyFailed", "Failed", or remains "InProgress" The Restore CR of a Restic backup completes with a PartiallyFailed or Failed status or it remains InProgress and does not complete. If the status is PartiallyFailed or Failed , the Velero pod log displays the error message, level=error msg="unable to successfully complete restic restores of pod's volumes" . If the status is InProgress , the Restore CR logs are unavailable and no errors appear in the Restic pod logs. Cause The DeploymentConfig object redeploys the Restore pod, causing the Restore CR to fail. Solution Create a Restore CR that excludes the ReplicationController , DeploymentConfig , and TemplateInstances resources: USD velero restore create --from-backup=<backup> -n openshift-adp \ 1 --include-namespaces <namespace> \ 2 --exclude-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io \ --restore-volumes=true 1 Specify the name of the Backup CR. 2 Specify the include-namespaces in the Backup CR. 
Verify that the status of the Restore CR is Completed : USD oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}' Create a Restore CR that includes the ReplicationController and DeploymentConfig resources: USD velero restore create --from-backup=<backup> -n openshift-adp \ --include-namespaces <namespace> \ --include-resources replicationcontroller,deploymentconfig \ --restore-volumes=true Verify that the status of the Restore CR is Completed : USD oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored: USD oc get all -n <namespace> 4.4.7.3. Restic Backup CR cannot be recreated after bucket is emptied If you create a Restic Backup CR for a namespace, empty the S3 bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails. The velero pod log displays the error message, msg="Error checking repository for stale locks" . Cause Velero does not create the Restic repository from the ResticRepository manifest if the Restic directories are deleted on object storage. See ( Velero issue 4421 ) for details. 4.4.8. Using the must-gather tool You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can run the must-gather tool with the following data collection options: Full must-gather data collection collects Prometheus metrics, pod logs, and Velero CR information for all namespaces where the OADP Operator is installed. Essential must-gather data collection collects pod logs and Velero CR information for a specific duration of time, for example, one hour or 24 hours. Prometheus metrics and duplicate logs are not included. must-gather data collection with timeout. Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value. Prometheus metrics data dump downloads an archive file containing the metrics data collected by Prometheus. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: Full must-gather data collection, including Prometheus metrics: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . Essential must-gather data collection, without Prometheus metrics, for a specific time duration: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. Allowed values are 1h , 6h , 24h , 72h , or all , for example, gather_1h_essential or gather_all_essential . must-gather data collection with timeout: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 \ -- /usr/bin/gather_with_timeout <timeout> 1 1 Specify a timeout value in seconds. Prometheus metrics data dump: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 \ -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . 
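Before you attach the must-gather/must-gather.tar.gz archive from the full data collection option to a support case, you can confirm that it contains the Velero data that you expect. The following commands are a minimal sketch: the archive name is taken from the procedure above, but the internal directory layout matched by the grep pattern is an assumption and can differ between must-gather image versions.
List the collected files: USD tar -tzf must-gather/must-gather.tar.gz > /tmp/must-gather-contents.txt
Check that Velero pod logs and OADP custom resource dumps were collected: USD grep -iE 'velero|oadp' /tmp/must-gather-contents.txt
Check the size of the archive before you upload it: USD du -h must-gather/must-gather.tar.gz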
Viewing metrics data with the Prometheus console You can view the metrics data with the Prometheus console. Procedure Decompress the prom_data.tar.gz file: USD tar -xvzf must-gather/metrics/prom_data.tar.gz Create a local Prometheus instance: USD make prometheus-run The command outputs the Prometheus URL. Output Started Prometheus on http://localhost:9090 Launch a web browser and navigate to the URL to view the data by using the Prometheus web console. After you have viewed the data, delete the Prometheus instance and data: USD make prometheus-cleanup
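While the local instance started by make prometheus-run is still running, you can also query it from the command line by using the Prometheus HTTP API instead of the web console. The following commands are a minimal sketch that assumes the instance is listening on http://localhost:9090 as shown in the output above; velero_backup_total is only an example metric name, so list the Velero metric names that are present in your data first and adjust the query accordingly.
List the Velero metric names in the collected data: USD curl -s http://localhost:9090/api/v1/label/__name__/values | grep -o '"velero[^"]*"'
Query one of the metrics: USD curl -s 'http://localhost:9090/api/v1/query?query=velero_backup_total'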
[ "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: resourceAllocations: limits: cpu: \"1\" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift <.> - aws restic: enable: true <.> backupLocations: - name: default velero: provider: aws default: true 
objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> config: region: <region> profile: \"default\" credential: key: cloud name: cloud-credentials <.> snapshotLocations: <.> - name: default velero: provider: aws config: region: <region> <.> profile: \"default\"", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/oadp-velero-sample-1-aws-registry-5d6968cbdd-d5w9k 1/1 Running 0 95s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/oadp-velero-sample-1-aws-registry-svc ClusterIP 172.30.130.230 <none> 5000/TCP 95s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/oadp-velero-sample-1-aws-registry 1/1 1 1 96s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/oadp-velero-sample-1-aws-registry-5d6968cbdd 1 1 1 96s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name USDAZURE_STORAGE_ACCOUNT_ID --query \"[?keyName == 'key1'].value\" -o tsv`", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_STORAGE_ACCOUNT_ACCESS_KEY=USD{AZURE_STORAGE_ACCOUNT_ACCESS_KEY} 1 AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: 
\"true\" name: default provider: azure", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: resourceAllocations: limits: cpu: \"1\" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - azure - openshift <.> restic: enable: true <.> backupLocations: - velero: config: resourceGroup: <azure_resource_group> <.> storageAccount: <azure_storage_account_id> <.> subscriptionId: <azure_subscription_id> <.> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure <.> provider: azure default: true objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> snapshotLocations: <.> - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/oadp-velero-sample-1-aws-registry-5d6968cbdd-d5w9k 1/1 Running 0 95s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/oadp-velero-sample-1-aws-registry-svc ClusterIP 172.30.130.230 <none> 5000/TCP 95s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/oadp-velero-sample-1-aws-registry 1/1 1 1 96s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/oadp-velero-sample-1-aws-registry-5d6968cbdd 1 1 1 96s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", 
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: resourceAllocations: limits: cpu: \"1\" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp - openshift <.> restic: enable: true <.> backupLocations: - velero: provider: gcp default: true credential: key: cloud name: cloud-credentials-gcp <.> objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.> snapshotLocations: <.> - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 <.>", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/oadp-velero-sample-1-aws-registry-5d6968cbdd-d5w9k 1/1 Running 0 95s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/oadp-velero-sample-1-aws-registry-svc ClusterIP 172.30.130.230 <none> 5000/TCP 95s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/oadp-velero-sample-1-aws-registry 1/1 1 1 96s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/oadp-velero-sample-1-aws-registry-5d6968cbdd 1 1 1 96s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2", "cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials -n openshift-adp --from-file 
cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift restic: enable: true backupLocations: - velero: config: profile: \"default\" region: minio s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: resourceAllocations: limits: cpu: \"1\" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift <.> restic: enable: true <.> backupLocations: - velero: config: profile: \"default\" region: minio s3Url: <url> <.> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials <.> objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.>", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/oadp-velero-sample-1-aws-registry-5d6968cbdd-d5w9k 1/1 Running 0 95s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/oadp-velero-sample-1-aws-registry-svc ClusterIP 172.30.130.230 <none> 5000/TCP 95s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/oadp-velero-sample-1-aws-registry 1/1 1 1 96s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/oadp-velero-sample-1-aws-registry-5d6968cbdd 1 1 1 96s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - csi - openshift featureFlags: - EnableCSI restic: enable: true backupLocations: - velero: provider: gcp default: true credential: 
key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: resourceAllocations: limits: cpu: \"1\" 1 memory: 512Mi 2 requests: cpu: 500m 3 memory: 256Mi 4", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp <.> - csi <.> - openshift <.> restic: enable: true <.> backupLocations: - velero: provider: gcp <.> default: true credential: key: cloud name: <default_secret> <.> objectStorage: bucket: <bucket_name> <.> prefix: <prefix> <.>", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/oadp-velero-sample-1-aws-registry-5d6968cbdd-d5w9k 1/1 Running 0 95s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/oadp-velero-sample-1-aws-registry-svc ClusterIP 172.30.130.230 <none> 5000/TCP 95s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/oadp-velero-sample-1-aws-registry 1/1 1 1 96s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/oadp-velero-sample-1-aws-registry-5d6968cbdd 1 1 1 96s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1 featureFlags: - EnableCSI 2", "oc get backupStorageLocations", "NAME PHASE LAST VALIDATED AGE DEFAULT velero-sample-1 Available 11s 31m", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 storageLocation: <velero-sample-1> 2 ttl: 720h0m0s", "oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}'", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass deletionPolicy: Retain metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" snapshotter: openshift-storage.rbd.csi.ceph.com driver: openshift-storage.rbd.csi.ceph.com parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" driver: openshift-storage.cephfs.csi.ceph.com deletionPolicy: Retain parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner 
csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" driver: <csi_driver> deletionPolicy: Retain", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToRestic: true 1", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server pre: 4 - exec: container: <container> 5 command: - /bin/uname 6 - -a onError: Fail 7 timeout: 30s 8 post: 9", "oc get backupStorageLocations", "NAME PHASE LAST VALIDATED AGE DEFAULT velero-sample-1 Available 11s 31m", "cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToRestic: true 4 ttl: 720h0m0s EOF", "oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true", "oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'", "oc get all -n <namespace> 1", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c - exec: container: <container> 4 command: - /bin/bash 5 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 6 execTimeout: 1m 7 onError: Continue 8", "alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'", "oc describe <velero_cr> <cr_name>", "oc logs pod/<velero>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "oc -n 
{namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>", "oc delete backup <backup> -n openshift-adp", "spec: configuration: restic: enable: true supplementalGroups: - <group_id> 1", "velero restore create --from-backup=<backup> -n openshift-adp \\ 1 --include-namespaces <namespace> \\ 2 --exclude-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io --restore-volumes=true", "oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'", "velero restore create --from-backup=<backup> -n openshift-adp --include-namespaces <namespace> --include-resources replicationcontroller,deploymentconfig --restore-volumes=true", "oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'", "oc get all -n <namespace>", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 -- /usr/bin/gather_<time>_essential 1", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 -- /usr/bin/gather_with_timeout <timeout> 1", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 -- /usr/bin/gather_metrics_dump", "tar -xvzf must-gather/metrics/prom_data.tar.gz", "make prometheus-run", "Started Prometheus on http://localhost:9090", "make prometheus-cleanup" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/backup_and_restore/application-backup-and-restore
8.3.7. sealert Messages
8.3.7. sealert Messages Denials are assigned IDs, as seen in /var/log/messages . The following is an example AVC denial (logged to messages ) that occurred when the Apache HTTP Server (running in the httpd_t domain) attempted to access the /var/www/html/file1 file (labeled with the samba_share_t type): As suggested, run the sealert -l 84e0b04d-d0ad-4347-8317-22e74f6cd020 command to view the complete message. This command only works on the local machine, and presents the same information as the sealert GUI: Summary A brief summary of the denied action. This is the same as the denial in /var/log/messages . In this example, the httpd process was denied access to a file ( file1 ), which is labeled with the samba_share_t type. Detailed Description A more verbose description. In this example, file1 is labeled with the samba_share_t type. This type is used for files and directories that you want to export via Samba. The description suggests changing the type to a type that can be accessed by the Apache HTTP Server and Samba, if such access is desired. Allowing Access A suggestion for how to allow access. This may be relabeling files, enabling a Boolean, or making a local policy module. In this case, the suggestion is to label the file with a type accessible to both the Apache HTTP Server and Samba. Fix Command A suggested command to allow access and resolve the denial. In this example, it gives the command to change the file1 type to public_content_t , which is accessible to the Apache HTTP Server and Samba. Additional Information Information that is useful in bug reports, such as the policy package name and version ( selinux-policy-3.5.13-11.fc12 ), but may not help towards solving why the denial occurred. Raw Audit Messages The raw audit messages from /var/log/audit/audit.log that are associated with the denial. Refer to Section 8.3.6, "Raw Audit Messages" for information about each item in the AVC denial.
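The chcon command in the Fix Command section changes the type of the file immediately, but the change is not recorded in the file-context policy, so a full file system relabel can revert it. The following commands are a minimal sketch of making the same fix persistent; they assume that the semanage utility is installed (provided by the policycoreutils-python package on Red Hat Enterprise Linux 6) and they reuse the file and type from the example above.
Add a persistent file-context mapping for the file: ~]# semanage fcontext -a -t public_content_t "/var/www/html/file1"
Apply the mapping to the file on disk: ~]# restorecon -v /var/www/html/file1
Confirm that the file is now labeled with the public_content_t type: ~]USD ls -Z /var/www/html/file1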
[ "hostname setroubleshoot: SELinux is preventing httpd (httpd_t) \"getattr\" to /var/www/html/file1 (samba_share_t). For complete SELinux messages. run sealert -l 84e0b04d-d0ad-4347-8317-22e74f6cd020", "~]USD sealert -l 84e0b04d-d0ad-4347-8317-22e74f6cd020 Summary: SELinux is preventing httpd (httpd_t) \"getattr\" to /var/www/html/file1 (samba_share_t). Detailed Description: SELinux denied access to /var/www/html/file1 requested by httpd. /var/www/html/file1 has a context used for sharing by different program. If you would like to share /var/www/html/file1 from httpd also, you need to change its file context to public_content_t. If you did not intend to this access, this could signal a intrusion attempt. Allowing Access: You can alter the file context by executing chcon -t public_content_t '/var/www/html/file1' Fix Command: chcon -t public_content_t '/var/www/html/file1' Additional Information: Source Context unconfined_u:system_r:httpd_t:s0 Target Context unconfined_u:object_r:samba_share_t:s0 Target Objects /var/www/html/file1 [ file ] Source httpd Source Path /usr/sbin/httpd Port <Unknown> Host hostname Source RPM Packages httpd-2.2.10-2 Target RPM Packages Policy RPM selinux-policy-3.5.13-11.fc12 Selinux Enabled True Policy Type targeted MLS Enabled True Enforcing Mode Enforcing Plugin Name public_content Host Name hostname Platform Linux hostname 2.6.27.4-68.fc12.i686 #1 SMP Thu Oct 30 00:49:42 EDT 2008 i686 i686 Alert Count 4 First Seen Wed Nov 5 18:53:05 2008 Last Seen Wed Nov 5 01:22:58 2008 Local ID 84e0b04d-d0ad-4347-8317-22e74f6cd020 Line Numbers Raw Audit Messages node= hostname type=AVC msg=audit(1225812178.788:101): avc: denied { getattr } for pid=2441 comm=\"httpd\" path=\"/var/www/html/file1\" dev=dm-0 ino=284916 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file node= hostname type=SYSCALL msg=audit(1225812178.788:101): arch=40000003 syscall=196 success=no exit=-13 a0=b8e97188 a1=bf87aaac a2=54dff4 a3=2008171 items=0 ppid=2439 pid=2441 auid=502 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=3 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-sealert_messages
Managing and allocating storage resources
Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.17 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/index
Chapter 2. APIService [apiregistration.k8s.io/v1]
Chapter 2. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. status object APIServiceStatus contains derived information about an API server 2.1.1. .spec Description APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. Type object Required groupPriorityMinimum versionPriority Property Type Description caBundle string CABundle is a PEM encoded CA bundle which will be used to validate an API server's serving certificate. If unspecified, system trust roots on the apiserver are used. group string Group is the API group name this server hosts groupPriorityMinimum integer GroupPriorityMininum is the priority this group should have at least. Higher priority means that the group is preferred by clients over lower priority ones. Note that other versions of this group might specify even higher GroupPriorityMininum values such that the whole group gets a higher priority. The primary sort is based on GroupPriorityMinimum, ordered highest number to lowest (20 before 10). The secondary sort is based on the alphabetical comparison of the name of the object. (v1.bar before v1.foo) We'd recommend something like: *.k8s.io (except extensions) at 18000 and PaaSes (OpenShift, Deis) are recommended to be in the 2000s insecureSkipTLSVerify boolean InsecureSkipTLSVerify disables TLS certificate verification when communicating with this server. This is strongly discouraged. You should use the CABundle instead. service object ServiceReference holds a reference to Service.legacy.k8s.io version string Version is the API version this server hosts. For example, "v1" versionPriority integer VersionPriority controls the ordering of this API version inside of its group. Must be greater than zero. The primary sort is based on VersionPriority, ordered highest to lowest (20 before 10). Since it's inside of a group, the number can be small, probably in the 10s. In case of equal version priorities, the version string will be used to compute the order inside a group. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). 
These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. 2.1.2. .spec.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Property Type Description name string Name is the name of the service namespace string Namespace is the namespace of the service port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 2.1.3. .status Description APIServiceStatus contains derived information about an API server Type object Property Type Description conditions array Current service state of apiService. conditions[] object APIServiceCondition describes the state of an APIService at a particular point 2.1.4. .status.conditions Description Current service state of apiService. Type array 2.1.5. .status.conditions[] Description APIServiceCondition describes the state of an APIService at a particular point Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, one-word, CamelCase reason for the condition's last transition. status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. 2.2. API endpoints The following API endpoints are available: /apis/apiregistration.k8s.io/v1/apiservices DELETE : delete collection of APIService GET : list or watch objects of kind APIService POST : create an APIService /apis/apiregistration.k8s.io/v1/watch/apiservices GET : watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiregistration.k8s.io/v1/apiservices/{name} DELETE : delete an APIService GET : read the specified APIService PATCH : partially update the specified APIService PUT : replace the specified APIService /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} GET : watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status GET : read status of the specified APIService PATCH : partially update status of the specified APIService PUT : replace status of the specified APIService 2.2.1. /apis/apiregistration.k8s.io/v1/apiservices Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of APIService Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind APIService Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK APIServiceList schema 401 - Unauthorized Empty HTTP method POST Description create an APIService Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body APIService schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 202 - Accepted APIService schema 401 - Unauthorized Empty 2.2.2. /apis/apiregistration.k8s.io/v1/watch/apiservices Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/apiregistration.k8s.io/v1/apiservices/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the APIService Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an APIService Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIService Table 2.17. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIService Table 2.18. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIService Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body APIService schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty 2.2.4. /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the APIService Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status Table 2.27. Global path parameters Parameter Type Description name string name of the APIService Table 2.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified APIService Table 2.29. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIService Table 2.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.31. Body parameters Parameter Type Description body Patch schema Table 2.32. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIService Table 2.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.34. Body parameters Parameter Type Description body APIService schema Table 2.35. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty
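The operations above can be exercised with standard Kubernetes tooling. The following is a minimal sketch and is not part of the original reference: it assumes an authenticated session, that the API server URL and a bearer token are available in the APISERVER and TOKEN environment variables, and that v1.apps is used only as an example APIService name.
# List APIService objects, passing the limit query parameter described above.
curl -k -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/apis/apiregistration.k8s.io/v1/apiservices?limit=10"
# The same list operation through the oc client, restricted by a label selector.
oc get apiservices -l <label>=<value>
# Read a single APIService by name (the /apiservices/{name} path described above).
oc get apiservice v1.apps -o yaml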
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/extension_apis/apiservice-apiregistration-k8s-io-v1
Chapter 9. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator
Chapter 9. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments. Note Upgrades of Event-Driven Ansible version 2.4 to 2.5 are not supported. Database migrations between Event-Driven Ansible 2.4 and Event-Driven Ansible 2.5 are not compatible. Use these procedures to migrate any of the following deployments to the Ansible Automation Platform Operator: OpenShift cluster A to OpenShift cluster B OpenShift namespace A to OpenShift namespace B Virtual machine (VM) based or containerized VM Ansible Automation Platform 2.5 Ansible Automation Platform 2.5 9.1. Migration considerations If you are upgrading from any version of Ansible Automation Platform older than 2.4, you must upgrade through Ansible Automation Platform first. If you are on OpenShift Container Platform 3 and you want to upgrade to OpenShift Container Platform 4, you must provision a fresh OpenShift Container Platform version 4 cluster and then migrate the Ansible Automation Platform to the new cluster. 9.2. Preparing for migration Before migrating your current Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must back up your existing data, and create Kubernetes secrets for your secret key and postgresql configuration. Note If you are migrating both automation controller and automation hub instances, repeat the steps in Creating a secret key secret and Creating a postgresql configuration secret for both and then proceed to Migrating data to the Ansible Automation Platform Operator . 9.2.1. Migrating to Ansible Automation Platform Operator Prerequisites To migrate Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must have the following: Secret key secret Postgresql configuration Role-based Access Control for the namespaces on the new OpenShift cluster The new OpenShift cluster must be able to connect to the PostgreSQL database Note You can store the secret key information in the inventory file before the initial Red Hat Ansible Automation Platform installation. If you are unable to remember your secret key or have trouble locating your inventory file, contact Ansible support through the Red Hat Customer portal. Before migrating your data from Ansible Automation Platform 2.4, you must back up your data for loss prevention. Procedure Log in to your current deployment project. Run USD ./setup.sh -b to create a backup of your current data or deployment. 9.2.2. Creating a secret key secret To migrate your data to Ansible Automation Platform Operator on OpenShift Container Platform, you must create a secret key. If you are migrating automation controller, automation hub, and Event-Driven Ansible you must have a secret key for each that matches the secret key defined in the inventory file during your initial installation. Otherwise, the migrated data remains encrypted and unusable after migration. Note When specifying the symmetric encryption secret key on the custom resources, note that for automation controller the field is called secret_key_name . But for automation hub and Event-Driven Ansible, the field is called db_fields_encryption_secret . 
Note In the Kubernetes secrets, automation controller and Event-Driven Ansible use the same stringData key ( secret_key ) but, automation hub uses a different key ( database_fields.symmetric.key ). Procedure Locate the old secret keys in the inventory file you used to deploy Ansible Automation Platform in your installation. Create a YAML file for your secret keys: --- apiVersion: v1 kind: Secret metadata: name: <controller-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <content of /etc/tower/SECRET_KEY from old controller> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <eda-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: </etc/ansible-automation-platform/eda/SECRET_KEY> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <hub-resourcename>-secret-key namespace: <target-namespace> stringData: database_fields.symmetric.key: </etc/pulp/certs/database_fields.symmetric.key> type: Opaque Note If admin_password_secret is not provided, the operator looks for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator generates a password and create a secret from it named <resourcename>-admin-password . Apply the secret key YAML to the cluster: oc apply -f <yaml-file> 9.2.3. Creating a postgresql configuration secret For migration to be successful, you must provide access to the database for your existing deployment. Procedure Create a YAML file for your postgresql configuration secret: apiVersion: v1 kind: Secret metadata: name: <resourcename>-old-postgres-configuration namespace: <target namespace> stringData: host: "<external ip or url resolvable by the cluster>" port: "<external port, this usually defaults to 5432>" database: "<desired database name>" username: "<username to connect as>" password: "<password to connect with>" type: Opaque Apply the postgresql configuration yaml to the cluster: oc apply -f <old-postgres-configuration.yml> 9.2.4. Verifying network connectivity To ensure successful migration of your data, verify that you have network connectivity from your new operator deployment to your old deployment database. Prerequisites Take note of the host and port information from your existing deployment. This information is located in the postgres.py file located in the conf.d directory. Procedure Create a YAML file to verify the connection between your new deployment and your old deployment database: apiVersion: v1 kind: Pod metadata: name: dbchecker spec: containers: - name: dbchecker image: registry.redhat.io/rhel8/postgresql-13:latest command: ["sleep"] args: ["600"] Apply the connection checker yaml file to your new project deployment: oc project ansible-automation-platform oc apply -f connection_checker.yaml Verify that the connection checker pod is running: oc get pods Connect to a pod shell: oc rsh dbchecker After the shell session opens in the pod, verify that the new project can connect to your old project cluster: pg_isready -h <old-host-address> -p <old-port-number> -U AutomationContoller Example 9.3. Migrating data to the Ansible Automation Platform Operator When migrating a 2.5 containerized or RPM installed deployment to OpenShift Container Platform you must create a secret with credentials to access the PostgreSQL database from the original deployment, then specify it when creating the Ansible Automation Platform object. Important The operator does not support Event-Driven Ansible migration at this time. 
Prerequisites You have completed the following procedures: Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Creating a secret key Creating a postgresql configuration secret Verifying network connectivity 9.3.1. Creating an Ansible Automation Platform object Use the following steps to create an AnsibleAutomationPlatform custom resource object. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Ansible Automation Platform tab. Click Create AnsibleAutomationPlatform . Seclect YAML view and paste in the following, modified accordingly: --- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: postgres_configuration_secret: external-postgres-configuration controller: disabled: false postgres_configuration_secret: external-controller-postgres-configuration secret_key_secret: controller-secret-key hub: disabled: false postgres_configuration_secret: external-hub-postgres-configuration db_fields_encryption_secret: hub-db-fields-encryption-secret Click Create . 9.4. Post migration cleanup After data migration, delete unnecessary instance groups and unlink the old database configuration secret from the automation controller resource definition. 9.4.1. Deleting Instance Groups post migration Procedure Log in to Red Hat Ansible Automation Platform as the administrator with the password you created during migration. Note If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select Workloads Secrets and open controller-admin-password. From there you can copy the password and paste it into the Red Hat Ansible Automation Platform password field. Select Automation Execution Infrastructure Instance Groups . Select all Instance Groups except controlplane and default. Click Delete . 9.4.2. Unlinking the old database configuration secret post migration Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Controller tab. Click your AutomationController object. You can then view the object through the Form view or YAML view . The following inputs are available through the YAML view . Locate the old_postgres_configuration_secret item within the spec section of the YAML contents. Delete the line that contains this item. Click Save .
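As an alternative to hand-writing the secret YAML shown in Creating a secret key secret, the same automation controller secret can be created directly from the old file with the oc client. This is only a sketch: it assumes the command is run from a host that can read the old /etc/tower/SECRET_KEY file, and the resource name and namespace are placeholders.
# Create the controller secret key secret from the old file; the data key must be
# named secret_key, matching the stringData key used in the YAML above.
oc create secret generic <controller-resourcename>-secret-key \
  --from-file=secret_key=/etc/tower/SECRET_KEY \
  -n <target-namespace>
# Confirm that the secrets referenced by the custom resource exist.
oc get secrets -n <target-namespace>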
[ "--- apiVersion: v1 kind: Secret metadata: name: <controller-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <content of /etc/tower/SECRET_KEY from old controller> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <eda-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: </etc/ansible-automation-platform/eda/SECRET_KEY> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <hub-resourcename>-secret-key namespace: <target-namespace> stringData: database_fields.symmetric.key: </etc/pulp/certs/database_fields.symmetric.key> type: Opaque", "apply -f <yaml-file>", "apiVersion: v1 kind: Secret metadata: name: <resourcename>-old-postgres-configuration namespace: <target namespace> stringData: host: \"<external ip or url resolvable by the cluster>\" port: \"<external port, this usually defaults to 5432>\" database: \"<desired database name>\" username: \"<username to connect as>\" password: \"<password to connect with>\" type: Opaque", "apply -f <old-postgres-configuration.yml>", "apiVersion: v1 kind: Pod metadata: name: dbchecker spec: containers: - name: dbchecker image: registry.redhat.io/rhel8/postgresql-13:latest command: [\"sleep\"] args: [\"600\"]", "project ansible-automation-platform apply -f connection_checker.yaml", "get pods", "rsh dbchecker", "pg_isready -h <old-host-address> -p <old-port-number> -U AutomationContoller", "<old-host-address>:<old-port-number> - accepting connections", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: postgres_configuration_secret: external-postgres-configuration controller: disabled: false postgres_configuration_secret: external-controller-postgres-configuration secret_key_secret: controller-secret-key hub: disabled: false postgres_configuration_secret: external-hub-postgres-configuration db_fields_encryption_secret: hub-db-fields-encryption-secret" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/aap-migration
8.219. talk
8.219. talk 8.219.1. RHBA-2013:1148 - talk bug fix update Updated talk packages that fix one bug are now available for Red Hat Enterprise Linux 6. The talk utility is a communication program that copies lines from one terminal to the terminal of another user. Bug Fix BZ#691355 The talk utility allows a user to specify the target user in the "username.hostname" form. As a consequence, earlier versions of the utility did not support usernames that contained a period. With this update, a new command line option (that is, "-x") has been added to enforce the use of the "username@hostname" form, so that the username can contain periods. In addition, the corresponding manual page has been extended to provide a complete list of supported command line arguments. Users of talk are advised to upgrade to these updated packages, which fix this bug.
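As an illustration of the new option (the user and host names here are placeholders, not taken from the advisory):
# Enforce the username@hostname form so that a username containing a period is
# parsed correctly.
talk -x jane.doe@workstation.example.com
# The traditional form is still accepted for usernames without periods.
talk jdoe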
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/talk
Chapter 66. Slack Source
Chapter 66. Slack Source Receive messages from a Slack channel. 66.1. Configuration Options The following table summarizes the configuration options available for the slack-source Kamelet: Property Name Description Type Default Example channel * Channel The Slack channel to receive messages from string "#myroom" token * Token The token to access Slack. A Slack app is needed. This app needs to have channels:history and channels:read permissions. The Bot User OAuth Access Token is the kind of token needed. string Note Fields marked with an asterisk (*) are mandatory. 66.2. Dependencies At runtime, the slack-source Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:slack camel:jackson 66.3. Usage This section describes how you can use the slack-source . 66.3.1. Knative Source You can use the slack-source Kamelet as a Knative source by binding it to a Knative object. slack-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: "#myroom" token: "The Token" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 66.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 66.3.1.2. Procedure for using the cluster CLI Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f slack-source-binding.yaml 66.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 66.3.2. Kafka Source You can use the slack-source Kamelet as a Kafka source by binding it to a Kafka topic. slack-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: "#myroom" token: "The Token" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 66.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 66.3.2.2. Procedure for using the cluster CLI Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f slack-source-binding.yaml 66.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 66.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/slack-source.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: \"#myroom\" token: \"The Token\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f slack-source-binding.yaml", "kamel bind slack-source -p \"source.channel=#myroom\" -p \"source.token=The Token\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: \"#myroom\" token: \"The Token\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f slack-source-binding.yaml", "kamel bind slack-source -p \"source.channel=#myroom\" -p \"source.token=The Token\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/slack-source
Chapter 11. Interoperability
Chapter 11. Interoperability This chapter discusses how to use AMQ JavaScript in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 11.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . This common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. When sending messages, AMQ JavaScript automatically converts language-native types to AMQP-encoded data. When receiving messages, the reverse conversion takes place. Note More information about AMQP types is available at the interactive type reference maintained by the Apache Qpid project. Table 11.1. AMQP types AMQP type Description null An empty value boolean A true or false value char A single Unicode character string A sequence of Unicode characters binary A sequence of bytes byte A signed 8-bit integer short A signed 16-bit integer int A signed 32-bit integer long A signed 64-bit integer ubyte An unsigned 8-bit integer ushort An unsigned 16-bit integer uint An unsigned 32-bit integer ulong An unsigned 64-bit integer float A 32-bit floating point number double A 64-bit floating point number array A sequence of values of a single type list A sequence of values of variable type map A mapping from distinct keys to values uuid A universally unique identifier symbol A 7-bit ASCII string from a constrained domain timestamp An absolute point in time JavaScript has fewer native types than AMQP can encode. To send messages containing specific AMQP types, use the wrap_ functions from the rhea/types.js module. Table 11.2. AMQ JavaScript types before encoding and after decoding AMQP type AMQ JavaScript type before encoding AMQ JavaScript type after decoding null null null boolean boolean boolean char wrap_char(number) number string string string binary wrap_binary(string) string byte wrap_byte(number) number short wrap_short(number) number int wrap_int(number) number long wrap_long(number) number ubyte wrap_ubyte(number) number ushort wrap_ushort(number) number uint wrap_uint(number) number ulong wrap_ulong(number) number float wrap_float(number) number double wrap_double(number) number array wrap_array(Array, code) Array list wrap_list(Array) Array map wrap_map(object) object uuid wrap_uuid(number) number symbol wrap_symbol(string) string timestamp wrap_timestamp(number) number Table 11.3. AMQ JavaScript and other AMQ client types (1 of 2) AMQ JavaScript type before encoding AMQ C++ type AMQ .NET type null nullptr null boolean bool System.Boolean wrap_char(number) wchar_t System.Char string std::string System.String wrap_binary(string) proton::binary System.Byte[] wrap_byte(number) int8_t System.SByte wrap_short(number) int16_t System.Int16 wrap_int(number) int32_t System.Int32 wrap_long(number) int64_t System.Int64 wrap_ubyte(number) uint8_t System.Byte wrap_ushort(number) uint16_t System.UInt16 wrap_uint(number) uint32_t System.UInt32 wrap_ulong(number) uint64_t System.UInt64 wrap_float(number) float System.Single wrap_double(number) double System.Double wrap_array(Array, code) - - wrap_list(Array) std::vector Amqp.List wrap_map(object) std::map Amqp.Map wrap_uuid(number) proton::uuid System.Guid wrap_symbol(string) proton::symbol Amqp.Symbol wrap_timestamp(number) proton::timestamp System.DateTime Table 11.4. 
AMQ JavaScript and other AMQ client types (2 of 2) AMQ JavaScript type before encoding AMQ Python type AMQ Ruby type null None nil boolean bool true, false wrap_char(number) unicode String string unicode String wrap_binary(string) bytes String wrap_byte(number) int Integer wrap_short(number) int Integer wrap_int(number) long Integer wrap_long(number) long Integer wrap_ubyte(number) long Integer wrap_ushort(number) long Integer wrap_uint(number) long Integer wrap_ulong(number) long Integer wrap_float(number) float Float wrap_double(number) float Float wrap_array(Array, code) proton.Array Array wrap_list(Array) list Array wrap_map(object) dict Hash wrap_uuid(number) - - wrap_symbol(string) str Symbol wrap_timestamp(number) long Time 11.2. Interoperating with AMQ JMS AMQP defines a standard mapping to the JMS messaging model. This section discusses the various aspects of that mapping. For more information, see the AMQ JMS Interoperability chapter. JMS message types AMQ JavaScript provides a single message type whose body type can vary. By contrast, the JMS API uses different message types to represent different kinds of data. The table below indicates how particular body types map to JMS message types. For more explicit control of the resulting JMS message type, you can set the x-opt-jms-msg-type message annotation. See the AMQ JMS Interoperability chapter for more information. Table 11.5. AMQ JavaScript and JMS message types AMQ JavaScript body type JMS message type string TextMessage null TextMessage wrap_binary(string) BytesMessage Any other type ObjectMessage 11.3. Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 11.4. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections .
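The following is a minimal sketch of how the wrap_ functions might be used when sending a message with the rhea container API. The connection details and address are placeholders, and the exact require path for the types module (documented above as rhea/types.js) can vary between rhea versions, so treat those details as assumptions.
// Send a message whose body carries explicitly typed AMQP values.
var container = require("rhea");
var types = require("rhea").types; // assumption: may also be require("rhea/lib/types")

container.on("sendable", function (context) {
    var body = types.wrap_map({
        count: types.wrap_uint(42),          // AMQP uint
        label: types.wrap_symbol("example"), // AMQP symbol
        payload: types.wrap_binary("abc")    // AMQP binary
    });
    context.sender.send({ body: body });
    context.sender.close();
    context.connection.close();
});

container.connect({ host: "localhost", port: 5672 }).open_sender("examples");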
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/interoperability
19.3. Near Caches Eviction
19.3. Near Caches Eviction Eviction of near caches can be configured by defining the maximum number of entries to keep in the near cache. When eviction is enabled, an LRU LinkedHashMap is used, and is protected by a ReentrantReadWrite lock to deal with concurrent updates.
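The chapter does not include a configuration snippet here. As a rough sketch only, a bounded near cache might be enabled on the Hot Rod client roughly as follows; the builder method and enum names are assumptions and should be checked against the Hot Rod client API shipped with your Data Grid version.
// Sketch: cap the near cache at 100 entries so the LRU map described above evicts
// older entries. Verify the exact API against the JBoss Data Grid 6.6 Javadoc.
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.NearCacheMode;

public class NearCacheConfigExample {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        builder.nearCache().mode(NearCacheMode.LAZY).maxEntries(100);
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        // ... obtain caches with manager.getCache() and use them as usual ...
        manager.stop();
    }
}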
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/near_caches_eviction
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/release_notes/proc_providing-feedback-on-red-hat-documentation
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/snip-conscious-language_decision-management-architecture
Chapter 13. Removing
Chapter 13. Removing The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows: Shut down all Red Hat build of OpenTelemetry pods. Remove any OpenTelemetryCollector instances. Remove the Red Hat build of OpenTelemetry Operator. 13.1. Removing an OpenTelemetry Collector instance by using the web console You can remove an OpenTelemetry Collector instance in the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Red Hat build of OpenTelemetry Operator OpenTelemetryInstrumentation or OpenTelemetryCollector . To remove the relevant instance, select Delete ... Delete . Optional: Remove the Red Hat build of OpenTelemetry Operator. 13.2. Removing an OpenTelemetry Collector instance by using the CLI You can remove an OpenTelemetry Collector instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> Procedure Get the name of the OpenTelemetry Collector instance by running the following command: USD oc get deployments -n <project_of_opentelemetry_instance> Remove the OpenTelemetry Collector instance by running the following command: USD oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance> Optional: Remove the Red Hat build of OpenTelemetry Operator. Verification To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again: USD oc get deployments -n <project_of_opentelemetry_instance> 13.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI
[ "oc login --username=<your_username>", "oc get deployments -n <project_of_opentelemetry_instance>", "oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>", "oc get deployments -n <project_of_opentelemetry_instance>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/dist-tracing-otel-removing
Chapter 3. Additional Use Cases for SSO with Red Hat JBoss Enterprise Application Platform
Chapter 3. Additional Use Cases for SSO with Red Hat JBoss Enterprise Application Platform In addition to the out-of-the-box functionality, JBoss EAP supports additional use cases for SSO, including SAML for browser-based SSO, desktop-based SSO, and SSO via a secure token service. 3.1. Browser-Based SSO Using SAML In a browser-based SSO scenario, one or more web applications, known as service providers, are connected to a centralized identity provider in a hub-and-spoke architecture. The identity provider (IDP) acts as the central source, or hub, for identity and role information to all the service providers, or spokes. When an unauthenticated user attempts to access one of the service providers, that user is redirected to an IDP to perform the authentication. The IDP authenticates the user, issues a SAML token specifying the role of the principal, and redirects them to the requested service provider. From there, that SAML token is used across all of the associated service providers to determine the user's identity and access. This specific method of using SSO starting at the service providers is known as a service provider-initiated flow. Like many SSO systems, JBoss EAP uses IDPs and SPs. Both of these components are enabled to be run within JBoss EAP instances and work in conjunction with the JBoss EAP security subsystem. SAML v2, FORM-based web application security, and HTTP/POST and HTTP/Redirect Bindings are also used to implement SSO. To create an identity provider, create a security domain, for example idp-domain , in a JBoss EAP instance that defines an authentication and authorization mechanism, for example LDAP or a database, to serve as the identity store. A web application, for example IDP.war , is configured to use additional modules, org.picketlink , required for running an IDP in conjunction with idp-domain and is deployed to that same JBoss EAP instance. IDP.war will serve as an identity provider. To create a service provider, a security domain is created, for example sp-domain , that uses SAML2LoginModule as an authentication mechanism. A web application, for example SP.war , is configured to use additional modules, org.picketlink , and contains a service provider valve that uses sp-domain . SP.war is deployed to an JBoss EAP instance where sp-domain is configured and is now a service provider. This process can be replicated for one or more SPs, for example SP2.war , SP3.war , and so on, and across one or more JBoss EAP instances. 3.1.1. Identity Provider Initiated Flow In most SSO scenarios, the SP starts the flow by sending an authentication request to the IDP, which sends a SAML response to SP with a valid assertion. This is known as a SP-initiated flow. The SAML 2.0 specifications define another flow, one called IDP-initiated or Unsolicited Response flow. In this scenario, the service provider does not initiate the authentication flow to receive a SAML response from the IDP. The flow starts on the IDP side. Once authenticated, the user can choose a specific SP from a list and get redirected to the SP's URL. No special configuration is necessary to enable this flow. Walkthrough User accesses the IDP. The IDP, seeing that there is neither a SAML request nor a response, assumes an IDP-initiated flow scenario using SAML. The IDP challenges the user to authenticate. Upon authentication, the IDP shows the hosted section where the user gets a page that links to all the SP applications. The user chooses an SP application. 
The IDP redirects the user to the SP with a SAML assertion in the query parameter, SAML response. The SP checks the SAML assertion and provides access. 3.1.2. Global Logout A global logout initiated at one SP logs out the user from the IDP and all the SPs. If a user attempts to access secured portions of any SP or IDP after performing a global logout, they must reauthenticate. 3.2. Desktop-Based SSO A desktop-based SSO scenario enables a principal to be shared across the desktop, usually governed by an Active Directory or Kerberos server, and a set of web applications which are the SPs. In this case, the desktop IDP serves as the IDP for the web applications. In a typical setup, the user logs in to a desktop governed by the Active Directory domain. The user accesses a web application via a web browser configured with JBoss Negotiation and hosted on the JBoss EAP. The web browser transfers the sign-on information from the local machine of the user to the web application. JBoss EAP uses background GSS messages with the Active Directory or any Kerberos Server to validate the user. This enables the user to achieve a seamless SSO into the web application. To set up a desktop-based SSO as an IDP for a web application, a security domain is created that connects to the IDP server. A NegotiationAuthenticator is added as a valve to the desired web application, and JBoss Negotiation is added to the SP container's class path. Alternatively, an IDP can be set up similarly to the browser-based SSO scenario but using the desktop-based SSO provider as an identity store. 3.3. SSO Using STS JBoss EAP offers several login modules for SPs to connect to an STS. It can also run an STS ( PicketLinkSTS ). More specifically, the PicketLinkSTS defines several interfaces to other security token services and provides extension points. Implementations can be plugged in by using configuration, and the default values can be specified for some properties via configuration. This means that the PicketLinkSTS generates and manages the security tokens but does not issue tokens of a specific type. Instead, it defines generic interfaces that allow multiple token providers to be plugged in. As a result, it can be configured to deal with various types of token, as long as a token provider exists for each token type. It also specifies the format of the security token request and response messages. The following steps are the order that the security token requests are processed when using the JBoss EAP STS. A client sends a security token request to PicketLinkSTS . PicketLinkSTS parses the request message and generates a Jakarta XML Binding object model. PicketLinkSTS reads the configuration file and creates the STSConfiguration object, if needed. It obtains a reference to the WSTrustRequestHandler from the configuration and delegates the request processing to the handler instance. The request handler uses the STSConfiguration to set default values when needed, for example when the request does not specify a token lifetime value. The WSTrustRequestHandler creates the WSTrustRequestContext and sets the Jakarta XML Binding request object and the caller principal it received from PicketLinkSTS . The WSTrustRequestHandler uses the STSConfiguration to get the SecurityTokenProvider that must be used to process the request based on the type of the token that is being requested. It invokes the provider and passes the constructed WSTrustRequestContext as a parameter. 
The SecurityTokenProvider instance processes the token request and stores the issued token in the request context. The WSTrustRequestHandler obtains the token from the context, encrypts it, if needed, and constructs the WS-Trust response object containing the security token. PicketLinkSTS obtains the response generated by the request handler and returns it to the client. An STS login module, for example STSIssuingLoginModule, STSValidatingLoginModule, SAML2STSLoginModule, and so on, is typically configured as part of the security setup of a JEE container to use an STS for authenticating users. The STS may be collocated on the same container as the login module or be accessed remotely through web service calls or another technology.
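To make the service provider side more concrete, the following sketch shows roughly how the sp-domain security domain described in the browser-based SSO section could be declared in the legacy security subsystem. The subsystem namespace version and the exact login module class are assumptions and should be checked against the PicketLink modules shipped with your JBoss EAP version.

<subsystem xmlns="urn:jboss:domain:security:2.0">
    <security-domains>
        <!-- Security domain used by the service provider application (SP.war) -->
        <security-domain name="sp-domain" cache-type="default">
            <authentication>
                <!-- Validates the SAML assertion issued by the IDP -->
                <login-module code="org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule" flag="required"/>
            </authentication>
        </security-domain>
    </security-domains>
</subsystem>

SP.war would then reference sp-domain, typically through the security-domain entry in its jboss-web.xml, and declare the org.picketlink module dependency, as outlined earlier.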
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/security_architecture/additional_sso_usecases
14.8.16. smbtar
14.8.16. smbtar smbtar <options> The smbtar program backs up and restores Windows-based share files and directories to a local tape archive. Although it is similar to the tar command, the two are not compatible.
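The following is a brief usage sketch; the server, share, user, password, and file names are placeholders, and the exact option set should be verified against the smbtar man page shipped with your Samba version.

# Back up the 'public' share on server 'wkstn1' to a local tar archive
smbtar -s wkstn1 -x public -u backupuser -p secret -t /tmp/public.tar

# Restore the same archive back to the share (-r selects restore mode)
smbtar -s wkstn1 -x public -u backupuser -p secret -t /tmp/public.tar -r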
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-smbtar
function::print_syms
function::print_syms Name function::print_syms - Print out kernel stack from string Synopsis Arguments callers String with list of hexadecimal (kernel) addresses Description This function performs a symbolic lookup of the addresses in the given string, which are assumed to be the result of prior calls to stack , callers , and similar functions. Prints one line per address, including the address, the name of the function containing the address, and an estimate of its position within that function, as obtained by symdata . Returns nothing.
[ "print_syms(callers:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-print-syms
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_and_managing_rhel_systems_in_hybrid_clouds/proc_providing-feedback-on-red-hat-documentation_host-management-services
2.9.2. Process Behavior in the Root Control Group
2.9.2. Process Behavior in the Root Control Group Certain blkio and cpu configuration options affect processes (tasks) running in the root cgroup in a different way than those in a subgroup. Consider the following example: Create two subgroups under one root group: /rootgroup/red/ and /rootgroup/blue/ In each subgroup and in the root group, define the cpu.shares configuration option and set it to 1 . In the scenario configured above, one process placed in each group (that is, one task in /rootgroup/tasks , /rootgroup/red/tasks and /rootgroup/blue/tasks ) consumes 33.33% of the CPU: Any other processes placed in subgroups blue and red result in the 33.33% of the CPU assigned to that specific subgroup being split among the multiple processes in that subgroup. However, multiple processes placed in the root group cause the CPU resource to be split per process, rather than per group. For example, if /rootgroup/ contains three processes, /rootgroup/red/ contains one process and /rootgroup/blue/ contains one process, and the cpu.shares option is set to 1 in all groups, the CPU resource is divided as follows: Therefore, it is recommended to move all processes from the root group to a specific subgroup when using the blkio and cpu configuration options which divide an available resource based on a weight or a share (for example, cpu.shares or blkio.weight ). To move all tasks from the root group into a specific subgroup, you can use the following commands:
[ "/rootgroup/ process: 33.33% /rootgroup/blue/ process: 33.33% /rootgroup/red/ process: 33.33%", "/rootgroup/ processes: 20% + 20% + 20% /rootgroup/blue/ process: 20% /rootgroup/red/ process: 20%", "rootgroup]# cat tasks >> red/tasks rootgroup]# echo > tasks" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/process_behavior
Chapter 46. EntityOperatorSpec schema reference
Chapter 46. EntityOperatorSpec schema reference Used in: KafkaSpec

Property | Description
topicOperator | Configuration of the Topic Operator. Type: EntityTopicOperatorSpec
userOperator | Configuration of the User Operator. Type: EntityUserOperatorSpec
tlsSidecar | TLS sidecar configuration. Type: TlsSidecar
template | Template for Entity Operator resources. The template allows users to specify how a Deployment and Pod is generated. Type: EntityOperatorTemplate
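For illustration, the Entity Operator is enabled by adding an entityOperator section to the Kafka custom resource, roughly as in the following sketch. The apiVersion shown and the empty operator configurations are assumptions; each field accepts the full schema referenced in the table above.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ... kafka and zookeeper configuration omitted ...
  entityOperator:
    topicOperator: {}
    userOperator: {}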
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-EntityOperatorSpec-reference
3.2.2.3. Other Ways of Securing SSH
3.2.2.3. Other Ways of Securing SSH Protocol Version Even though the implementation of the SSH protocol supplied with Red Hat Enterprise Linux supports both the SSH-1 and SSH-2 versions of the protocol, only the latter should be used whenever possible. The SSH-2 version contains a number of improvements over the older SSH-1, and the majority of advanced configuration options is only available when using SSH-2. Users are encouraged to make use of SSH-2 in order to maximize the extent to which the SSH protocol protects the authentication and communication for which it is used. The version or versions of the protocol supported by the sshd daemon can be specified using the Protocol configuration directive in the /etc/ssh/sshd_config file. The default setting is 2 . Key Types While the ssh-keygen command generates a pair of SSH-2 RSA keys by default, using the -t option, it can be instructed to generate DSA or ECDSA keys as well. The ECDSA (Elliptic Curve Digital Signature Algorithm) offers better performance at the same symmetric key length. It also generates shorter keys. Non-Default Port By default, the sshd daemon listens on the 22 network port. Changing the port reduces the exposure of the system to attacks based on automated network scanning, thus increasing security through obscurity. The port can be specified using the Port directive in the /etc/ssh/sshd_config configuration file. Note also that the default SELinux policy must be changed to allow for the use of a non-default port. You can do this by modifying the ssh_port_t SELinux type by typing the following command as root : In the above command, replace port_number with the new port number specified using the Port directive. No Root Login Provided that your particular use case does not require the possibility of logging in as the root user, you should consider setting the PermitRootLogin configuration directive to no in the /etc/ssh/sshd_config file. By disabling the possibility of logging in as the root user, the administrator can audit which user runs what privileged command after they log in as regular users and then gain root rights. Important This section draws attention to the most common ways of securing an SSH setup. By no means should this list of suggested measures be considered exhaustive or definitive. Refer to sshd_config(5) for a description of all configuration directives available for modifying the behavior of the sshd daemon and to ssh(1) for an explanation of basic SSH concepts.
[ "~]# semanage -a -t ssh_port_t -p tcp port_number" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-encryption-data_in_motion-secure_shell-other_ways
Chapter 4. Examples
Chapter 4. Examples This chapter demonstrates the use of AMQ C++ through example programs. For more examples, see the AMQ C++ example suite and the Qpid Proton C++ examples . Note The code presented in this guide uses C++11 features. AMQ C++ is also compatible with C++03, but the code requires minor modifications. 4.1. Sending messages This client program connects to a server using <connection-url> , creates a sender for target <address> , sends a message containing <message-body> , closes the connection, and exits. Example: Sending messages #include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/sender.hpp> #include <proton/target.hpp> #include <iostream> #include <string> struct send_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; std::string message_body_ {}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user("<user>"); // opts.password("<password>"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_sender(address_); } void on_sender_open(proton::sender& snd) override { std::cout << "SEND: Opened sender for target address '" << snd.target().address() << "'\n"; } void on_sendable(proton::sender& snd) override { proton::message msg {message_body_}; snd.send(msg); std::cout << "SEND: Sent message '" << msg.body() << "'\n"; snd.close(); snd.connection().close(); } }; int main(int argc, char** argv) { if (argc != 4) { std::cerr << "Usage: send <connection-url> <address> <message-body>\n"; return 1; } send_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; handler.message_body_ = argv[3]; proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << "\n"; return 1; } return 0; } Running the example To run the example program, copy it to a local file, compile it, and execute it from the command line. For more information, see Chapter 3, Getting started . USD g++ send.cpp -o send -std=c++11 -lstdc++ -lqpid-proton-cpp USD ./send amqp://localhost queue1 hello 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. 
Example: Receiving messages #include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/delivery.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/receiver.hpp> #include <proton/source.hpp> #include <iostream> #include <string> struct receive_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; int desired_ {0}; int received_ {0}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user("<user>"); // opts.password("<password>"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_receiver(address_); } void on_receiver_open(proton::receiver& rcv) override { std::cout << "RECEIVE: Opened receiver for source address '" << rcv.source().address() << "'\n"; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << "RECEIVE: Received message '" << msg.body() << "'\n"; received_++; if (received_ == desired_) { dlv.receiver().close(); dlv.connection().close(); } } }; int main(int argc, char** argv) { if (argc != 3 && argc != 4) { std::cerr << "Usage: receive <connection-url> <address> [<message-count>]\n"; return 1; } receive_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; if (argc == 4) { handler.desired_ = std::stoi(argv[3]); } proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << "\n"; return 1; } return 0; } Running the example To run the example program, copy it to a local file, compile it, and execute it from the command line. For more information, see Chapter 3, Getting started . USD g++ receive.cpp -o receive -std=c++11 -lstdc++ -lqpid-proton-cpp USD ./receive amqp://localhost queue1
[ "#include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/sender.hpp> #include <proton/target.hpp> #include <iostream> #include <string> struct send_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; std::string message_body_ {}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user(\"<user>\"); // opts.password(\"<password>\"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_sender(address_); } void on_sender_open(proton::sender& snd) override { std::cout << \"SEND: Opened sender for target address '\" << snd.target().address() << \"'\\n\"; } void on_sendable(proton::sender& snd) override { proton::message msg {message_body_}; snd.send(msg); std::cout << \"SEND: Sent message '\" << msg.body() << \"'\\n\"; snd.close(); snd.connection().close(); } }; int main(int argc, char** argv) { if (argc != 4) { std::cerr << \"Usage: send <connection-url> <address> <message-body>\\n\"; return 1; } send_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; handler.message_body_ = argv[3]; proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << \"\\n\"; return 1; } return 0; }", "g++ send.cpp -o send -std=c++11 -lstdc++ -lqpid-proton-cpp ./send amqp://localhost queue1 hello", "#include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/delivery.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/receiver.hpp> #include <proton/source.hpp> #include <iostream> #include <string> struct receive_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; int desired_ {0}; int received_ {0}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user(\"<user>\"); // opts.password(\"<password>\"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_receiver(address_); } void on_receiver_open(proton::receiver& rcv) override { std::cout << \"RECEIVE: Opened receiver for source address '\" << rcv.source().address() << \"'\\n\"; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << \"RECEIVE: Received message '\" << msg.body() << \"'\\n\"; received_++; if (received_ == desired_) { dlv.receiver().close(); dlv.connection().close(); } } }; int main(int argc, char** argv) { if (argc != 3 && argc != 4) { std::cerr << \"Usage: receive <connection-url> <address> [<message-count>]\\n\"; return 1; } receive_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; if (argc == 4) { handler.desired_ = std::stoi(argv[3]); } proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << \"\\n\"; return 1; } return 0; }", "g++ receive.cpp -o receive -std=c++11 -lstdc++ -lqpid-proton-cpp ./receive amqp://localhost queue1" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/examples
function::vm_fault_contains
function::vm_fault_contains Name function::vm_fault_contains - Test return value for page fault reason Synopsis Arguments value The fault_type returned by vm.page_fault.return test The type of fault to test for (VM_FAULT_OOM or similar)
[ "function vm_fault_contains:long(value:long,test:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-vm-fault-contains
Chapter 2. Identity Management Integration
Chapter 2. Identity Management Integration This chapter describes how to integrate Identity Service (keystone) with Red Hat Identity Management. In this use case, Identity Service authenticates certain Red Hat Identity Management (IdM) users, while retaining authorization settings and critical service accounts in the Identity Service database. As a result, Identity Service has read-only access to IdM for user account authentication, while retaining management over the privileges assigned to authenticated accounts. Note If you are using director, see Chapter 4, Using domain-specific LDAP backends with director . This is because the configuration files referenced below are managed by Puppet. Consequently, any custom configuration you add might be overwritten whenever you run the openstack overcloud deploy process. Note For additional integration options using novajoin , see Chapter 3, Integrate with IdM using novajoin . 2.1. Key terms Authentication - The process of using a password to verify that the user is who they claim to be. Authorization - Validating that authenticated users have proper permissions to the systems they're attempting to access. Domain - Refers to the additional back ends configured in Identity Service. For example, Identity Service can be configured to authenticate users from external IdM environments. The resulting collection of users can be thought of as a domain . 2.2. Assumptions This example deployment makes the following assumptions: Red Hat Identity Management is configured and operational. Red Hat OpenStack Platform is configured and operational. DNS name resolution is fully functional and all hosts are registered appropriately. 2.3. Impact Statement These steps allow IdM users to authenticate to OpenStack and access resources. OpenStack service accounts (such as keystone and glance), and authorization management (permissions and roles) will remain in the Identity Service database. Permissions and roles are assigned to the IdM accounts using Identity Service management tools. 2.3.1. High Availability options This configuration creates a dependency on the availability of a single IdM server: Project users will be affected if Identity Service is unable to authenticate to the IdM Server. There are a number of options available to manage this risk, for example: you might configure keystone to query a DNS alias or a load balancing appliance, rather than an individual IdM server. You can also configure keystone to query a different IdM server, should one become unavailable. See Section 2.11, "Configure for high availability" for more information. 2.4. Outage requirements The Identity Service will need to be restarted in order to add the IdM back end. The Compute services on all nodes will need to be restarted in order to switch over to keystone v3 . Users will be unable to access the dashboard until their accounts have been created in IdM. To reduce downtime, consider pre-staging the IdM accounts well in advance of this change. 2.5. Firewall configuration If firewalls are filtering traffic between IdM and OpenStack, you will need to allow access through the following port: Source Destination Type Port OpenStack Controller Node Red Hat Identity Management LDAPS TCP 636 2.6. Configure the IdM server Run these commands on the IdM server: Create the LDAP lookup account. This account is used by Identity Service to query the IdM LDAP service: Note Review the password expiration settings of this account, once created. 
Create a group for OpenStack users, called grp-openstack . Only members of this group can have permissions assigned in OpenStack Identity. Set the svc-ldap account password, and add it to the grp-openstack group: Login as svc-ldap user and perform the password change when prompted: 2.7. Configure the LDAPS certificate Note When using multiple domains for LDAP authentication, you might receive various errors, such as Unable to retrieve authorized projects , or Peer's Certificate issuer is not recognized . This can arise if keystone uses the incorrect certificate for a certain domain. As a workaround, merge all of the LDAPS public keys into a single .crt bundle, and configure all of your keystone domains to use this file. In your IdM environment, locate the LDAPS certificate. This file can be located using /etc/openldap/ldap.conf : Copy the file to the OpenStack node that runs the keystone service. For example, this command uses scp to copy ca.crt to the node named node.lab.local : On the OpenStack node, convert the .crt to .pem: Copy the .crt to the certificate directory. This is the location that the keystone service will use to access the certificate: Note Optionally, if you need to run diagnostic commands, such as ldapsearch , you will also need to add the certificate to the RHEL certificate store. For example: 2.8. Configure Identity Service These steps prepare Identity Service for integration with IdM. Note If you are using director, note that the configuration files referenced below are managed by Puppet. Consequently, any custom configuration you add might be overwritten whenever you run the openstack overcloud deploy process. To apply these settings to director-based deployments, see Chapter 4, Using domain-specific LDAP backends with director . 2.8.1. Configure the controller Note If you intend to update any configuration files, you need to be aware that certain OpenStack services now run within containers; this applies to keystone, nova, and cinder, among others. As a result, there are certain administration practices to consider: Do not update any configuration file you might find on the physical node's host operating system, for example, /etc/cinder/cinder.conf . This is because the containerized service does not reference this file. Do not update the configuration file running within the container. This is because any changes are lost once you restart the container. Instead, if you need to add any changes to containerized services, you will need to update the configuration file that is used to generate the container. These are stored within /var/lib/config-data/puppet-generated/ For example: keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf nova: /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf Any changes will then be applied once you restart the container. For example: sudo systemctl restart tripleo_keystone Perform this procedure on the controller running the keystone service: Configure SELinux: The output might include messages similar to this. They can be ignored: Create the domains directory: Configure Identity Service to use multiple back ends: Note You might need to install crudini using dnf install crudini . Note If you are using director, note that /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf is managed by Puppet. 
Consequently, any custom configuration you add might be overwritten whenever you run the openstack overcloud deploy process. As a result, you might need to re-add this configuration manually each time. For director-based deployments, see Chapter 4, Using domain-specific LDAP backends with director . Enable multiple domains in dashboard. Add these lines to /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings : Note If you are using director, note that /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings is managed by Puppet. Consequently, any custom configuration you add might be overwritten whenever you run the openstack overcloud deploy process. As a result, you might need to re-add this configuration manually each time. Restart the horizon container to apply the settings: Configure an additional back end: Create the keystone domain for IdM integration. You will need to decide on a name to use for your new keystone domain, and then create the domain. For example, this command creates a keystone domain named LAB : Note If this command is not available, check that you have enabled keystone v3 for your command line session. Create the configuration file: To add the IdM back end, enter the LDAP settings in a new file called /var/lib/config-data/puppet-generated/keystone/etc/keystone/domains/keystone.LAB.conf (where LAB is the domain name created previously). You will need to edit the sample settings below to suit your IdM deployment: Explanation of each setting: Setting Description url The IdM server to use for authentication. Uses LDAPS port 636 . user The account in IdM to use for LDAP queries. password The plaintext password of the IdM account used above. user_filter Filters the users presented to Identity Service. As a result, only members of the grp-openstack group can have permissions defined in Identity Service. user_tree_dn The path to the OpenStack accounts in IdM. user_objectclass Defines the type of LDAP user. For IdM, use the inetUser type. user_id_attribute Maps the IdM value to use for user IDs. user_name_attribute Maps the IdM value to use for names . user_mail_attribute Maps the IdM value to use for user email addresses. user_pass_attribute Leave this value blank. Note Integration with an IdM group will only return direct members, and not nested groups. As a result, queries that rely on LDAP_MATCHING_RULE_IN_CHAIN or memberof:1.2.840.113556.1.4.1941: will not currently work with IdM. Change ownership of the config file to the keystone user: Grant the admin user access to the domain: Note This does not grant the OpenStack admin account any permissions in IdM. In this case, the term domain refers to OpenStack's usage of the keystone domain. Get the ID of the LAB domain: Get the ID value of the admin user: Get the ID value of the admin role: Use the returned domain and admin IDs to construct the command that adds the admin user to the admin role of the keystone LAB domain: Restart the keystone service to apply the changes: View the list of users in the IdM domain by adding the keystone domain name to the command: View the service accounts in the local keystone database: 2.8.2. Allow IdM group members to access Projects To allow authenticated users access to OpenStack resources, the recommended method is to authorize certain IdM groups to grant access to Projects. This saves the OpenStack administrators from having to allocate each user to a role in a Project. Instead, the IdM groups are granted roles in Projects. 
As a result, IdM users that are members of these IdM groups will be able to access pre-determined Projects. Note If you would prefer to manually manage the authorization of individual IdM users, see the Section 2.8.3, "Allow IdM users to access Projects" . This section presumes that the IdM administrator has already completed these steps: Create a group named grp-openstack-admin in IdM. Create a group named grp-openstack-demo in IdM. Add your IdM users to one of the above groups, as needed. Add your IdM users to the grp-openstack group. Have a designated project in mind. This example uses a project called demo , created using openstack project create --domain default --description "Demo Project" demo . These steps assign a role to an IdM group. Group members will then have permission to access OpenStack resources. Retrieve a list of IdM groups: Retrieve a list of roles: Grant the IdM groups access to Projects by adding them to one or more of these roles. For example, if you want users in the grp-openstack-demo group to be general users of the demo project, you must add the group to the _member_ role: As a result, members of grp-openstack-demo are able to log in to the dashboard by entering their IdM username and password, when also entering LAB in the Domain field: Note If users receive the error Error: Unable to retrieve container list. , and expect to be able to manage containers, then they must be added to the SwiftOperator role. 2.8.3. Allow IdM users to access Projects IdM users that are members of the grp-openstack IdM group can be granted permission to log in to a Project in the dashboard: Retrieve a list of IdM users: Retrieve a list of roles: Grant users access to Projects by adding them to one or more of these roles. For example, if you want user1 to be a general user of the demo project, you add them to the member role: Or, if you want user1 to be an administrative user of the demo project, you add them to the admin role: As a result, user1 is able to log in to the dashboard by entering their IdM username and password, when also entering LAB in the Domain field: Note If users receive the error Error: Unable to retrieve container list. , and expect to be able to manage containers, then they must be added to the SwiftOperator role. 2.9. Grant access to the Domain tab To allow the admin user to see the Domain tab, you will need to assign it the admin role in the default domain: Find the admin user's UUID: Add the admin role in the default domain to the admin user: As a result, the admin user can now see the Domain tab. 2.10. Creating a new project After you have completed these integration steps, when you create a new project you will need to decide whether to create it in the Default domain, or in the keystone domain you've just created. This decision can be reached by considering your workflow, and how you administer user accounts. The Default domain can be be thought of as an internal domain, used for service accounts and the admin project, so it might make sense for your AD-backed users to be placed within a different keystone domain; this does not strictly need to be the same keystone domain as the IdM users are in, and for separation purposes, there might be multiple keystone domains. 2.10.1. Changes to the dashboard log in process Configuring multiple domains in Identity Service enables a new Domain field in the dashboard login page. Users are expected to enter the domain that matches their login credentials. 
This field must be manually filled with one of the domains present in keystone. Use the openstack command to list the available entries. In this example, IdM accounts will need to specify the LAB domain. The built-in keystone accounts, such as admin , must specify Default as their domain: 2.10.2. Changes to the command line For certain commands, you might need to specify the applicable domain. For example, appending --domain LAB in this command returns users in the LAB domain (that are members of the grp-openstack group): Appending --domain Default returns the built-in keystone accounts: 2.10.3. Test IdM integration This procedure validates IdM integration by testing user access to dashboard features: Create a test user in IdM, and add the user to the grp-openstack IdM group. Add the user to the _member_ role of the demo project. Log in to the dashboard using the credentials of the IdM test user. Click on each of the tabs to confirm that they are presented successfully without error messages. Use the dashboard to build a test instance. Note If you experience issues with these steps, perform steps 3-5 with the built-in admin account. If successful, this demonstrates that OpenStack is still working as expected, and that an issue exists somewhere within the IdM <--> Identity integration settings. See Section 2.13, "Troubleshooting" . 2.11. Configure for high availability With keystone v3 enabled, you can make this configuration highly available by listing multiple IdM servers in the configuration file for that domain. Add a second server to the url entry. For example, updating the url setting in the keystone.LAB.conf file will have Identity Service send all query traffic to the first IdM server in the list, idm.lab.local : If a query to idm.lab.local fails due to it being unavailable, Identity Service will attempt to query the server in the list: idm2.lab.local . Note that this configuration does not perform queries in a round-robin fashion, so cannot be considered a load-balancing solution. Set the network timeout in /etc/openldap/ldap.conf : In addition, if you have firewalls configured between the controller and the IdM servers, then you should not configure the IdM servers to silently drop packets from the controller. This will allow python-keystoneclient to properly detect outages and redirect the request to the IdM server in the list. Note There might be connection delays while queries are being redirected to the second IdM server in the list. This is because the connection to the first server must first time out before the second is attempted. 2.12. Create a RC file for a non-admin user You might need to create a RC file for a non-admin user. For example: 2.13. Troubleshooting 2.13.1. Test LDAP connections Use ldapsearch to remotely perform test queries against the IdM server. A successful result here indicates that network connectivity is working, and the IdM services are up. In this example, a test query is performed against the server idm.lab.local on port 636: Note ldapsearch is a part of the openldap-clients package. You can install this using # dnf install openldap-clients . 2.13.2. Test port access Use nc to check that the LDAPS port (636) is remotely accessible. In this example, a probe is performed against the server idm.lab.local . Press ctrl-c to exit the prompt. Failure to establish a connection could indicate a firewall configuration issue.
[ "kinit admin ipa user-add First name: OpenStack Last name: LDAP User [radministrator]: svc-ldap", "ipa group-add --desc=\"OpenStack Users\" grp-openstack", "ipa passwd svc-ldap ipa group-add-member --users=svc-ldap grp-openstack", "kinit svc-ldap", "TLS_CACERT /etc/ipa/ca.crt", "scp /etc/ipa/ca.crt [email protected]:/root/", "openssl x509 -in ca.crt -out ca.pem -outform PEM", "cp ca.crt/etc/pki/ca-trust/source/anchors", "cp ca.pem /etc/pki/ca-trust/source/anchors/ update-ca-trust", "setsebool -P authlogin_nsswitch_use_ldap=on", "Full path required for exclude: net:[4026532245].", "mkdir /var/lib/config-data/puppet-generated/keystone/etc/keystone/domains/ chown 42425:42425 /var/lib/config-data/puppet-generated/keystone/etc/keystone/domains/", "crudini --set /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf identity domain_specific_drivers_enabled true crudini --set /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf identity domain_config_dir /etc/keystone/domains crudini --set /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf assignment driver sql", "OPENSTACK_API_VERSIONS = { \"identity\": 3 } OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'", "sudo systemctl restart tripleo_horizon", "openstack domain create LAB", "[ldap] url = ldaps://idm.lab.local user = uid=svc-ldap,cn=users,cn=accounts,dc=lab,dc=local user_filter = (memberOf=cn=grp-openstack,cn=groups,cn=accounts,dc=lab,dc=local) password = RedactedComplexPassword user_tree_dn = cn=users,cn=accounts,dc=lab,dc=local user_objectclass = inetUser user_id_attribute = uid user_name_attribute = uid user_mail_attribute = mail user_pass_attribute = group_tree_dn = cn=groups,cn=accounts,dc=lab,dc=local group_objectclass = groupOfNames group_id_attribute = cn group_name_attribute = cn group_member_attribute = member group_desc_attribute = description use_tls = False query_scope = sub chase_referrals = false tls_cacertfile =/etc/pki/ca-trust/source/anchors/anchorsca.crt [identity] driver = ldap", "chown 42425:42425 /var/lib/config-data/puppet-generated/keystone/etc/keystone/domains/keystone.LAB.conf", "openstack domain show LAB +---------+----------------------------------+ | Field | Value | +---------+----------------------------------+ | enabled | True | | id | 6800b0496429431ab1c4efbb3fe810d4 | | name | LAB | +---------+----------------------------------+", "openstack user list --domain default | grep admin | 3d75388d351846c6a880e53b2508172a | admin |", "openstack role list +----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 544d48aaffde48f1b3c31a52c35f01f9 | SwiftOperator | | 6d005d783bf0436e882c55c62457d33d | ResellerAdmin | | 785c70b150ee4c778fe4de088070b4cf | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | +----------------------------------+---------------+", "openstack role add --domain 6800b0496429431ab1c4efbb3fe810d4 --user 3d75388d351846c6a880e53b2508172a 785c70b150ee4c778fe4de088070b4cf", "sudo podman restart keystone", "openstack user list --domain LAB", "openstack user list --domain default", "openstack group list --domain LAB +------------------------------------------------------------------+---------------------+ | ID | Name | +------------------------------------------------------------------+---------------------+ | 185277be62ae17e498a69f98a59b66934fb1d6b7f745f14f5f68953a665b8851 | grp-openstack | | 
a8d17f19f464c4548c18b97e4aa331820f9d3be52654aa8094e698a9182cbb88 | grp-openstack-admin | | d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 | grp-openstack-demo | +------------------------------------------------------------------+---------------------+", "openstack role list +----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 0969957bce5e4f678ca6cef00e1abf8a | ResellerAdmin | | 1fcb3c9b50aa46ee8196aaaecc2b76b7 | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | d3570730eb4b4780a7fed97eba197e1b | SwiftOperator | +----------------------------------+---------------+", "openstack role add --project demo --group d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 _member_", "openstack user list --domain LAB +------------------------------------------------------------------+----------------+ | ID | Name | +------------------------------------------------------------------+----------------+ | 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e | user1 | | 12c062faddc5f8b065434d9ff6fce03eb9259537c93b411224588686e9a38bf1 | user2 | | afaf48031eb54c3e44e4cb0353f5b612084033ff70f63c22873d181fdae2e73c | user3 | | e47fc21dcf0d9716d2663766023e2d8dc15a6d9b01453854a898cabb2396826e | user4 | +------------------------------------------------------------------+----------------+", "openstack role list +----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 544d48aaffde48f1b3c31a52c35f01f9 | SwiftOperator | | 6d005d783bf0436e882c55c62457d33d | ResellerAdmin | | 785c70b150ee4c778fe4de088070b4cf | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | +----------------------------------+---------------+", "openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e _member_", "openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e admin", "openstack user list | grep admin | a6a8adb6356f4a879f079485dad1321b | admin |", "openstack role add --domain default --user a6a8adb6356f4a879f079485dad1321b admin", "openstack domain list +----------------------------------+---------+---------+----------------------------------------------------------------------+ | ID | Name | Enabled | Description | +----------------------------------+---------+---------+----------------------------------------------------------------------+ | 6800b0496429431ab1c4efbb3fe810d4 | LAB | True | | | default | Default | True | Owns users and projects available on Identity API v2. | +----------------------------------+---------+---------+----------------------------------------------------------------------+", "openstack user list --domain LAB", "openstack user list --domain Default", "url = ldaps://idm.lab.local,ldaps://idm2.lab.local", "NETWORK_TIMEOUT 2", "cat overcloudrc-v3-user1 Clear any old environment that may conflict. 
for key in USD( set | awk '{FS=\"=\"} /^OS_/ {print USD1}' ); do unset USDkey ; done export OS_USERNAME=user1 export NOVA_VERSION=1.1 export OS_PROJECT_NAME=demo export OS_PASSWORD=RedactedComplexPassword export OS_NO_CACHE=True export COMPUTE_API_VERSION=1.1 export no_proxy=,10.0.0.5,192.168.2.11 export OS_CLOUDNAME=overcloud export OS_AUTH_URL=https://10.0.0.5:5000/v3 export OS_AUTH_TYPE=password export PYTHONWARNINGS=\"ignore:Certificate has no, ignore:A true SSLContext object is not available\" export OS_IDENTITY_API_VERSION=3 export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=LAB", "ldapsearch -D \"cn=directory manager\" -H ldaps://idm.lab.local:636 -b \"dc=lab,dc=local\" -s sub \"(objectclass=*)\" -w RedactedComplexPassword", "nc -v idm.lab.local 636 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 192.168.200.10:636. ^C" ]
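As a follow-on to the section on creating a new project, the commands below sketch how a project could be created in the IdM-backed keystone domain and how an IdM group is then granted access to it. The project name, description, and group ID are placeholders, and the --project-domain option is assumed to be required because the project lives outside the Default domain.

openstack project create --domain LAB --description "IdM-backed project" lab-project
openstack role add --project lab-project --project-domain LAB --group <idm-group-id> _member_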
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/integrate_with_identity_service/idm
14.5.23. Creating a Virtual Machine XML Dump (Configuration File)
14.5.23. Creating a Virtual Machine XML Dump (Configuration File) Output a guest virtual machine's XML configuration file with virsh : This command outputs the guest virtual machine's XML configuration file to standard out ( stdout ). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml : This file guest.xml can recreate the guest virtual machine (refer to Section 14.6, "Editing a Guest Virtual Machine's configuration file" ). You can edit this XML configuration file to configure additional devices or to deploy additional guest virtual machines. An example of virsh dumpxml output: Note that the <shareable/> flag is set. This indicates the device is expected to be shared between domains (assuming the hypervisor and OS support this), which means that caching should be deactivated for that device.
[ "virsh dumpxml {guest-id, guestname or uuid}", "virsh dumpxml GuestID > guest.xml", "virsh dumpxml guest1-rhel6-64 <domain type='kvm'> <name>guest1-rhel6-64</name> <uuid>b8d7388a-bbf2-db3a-e962-b97ca6e514bd</uuid> <memory>2097152</memory> <currentMemory>2097152</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='rhel6.2.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='threads'/> <source file='/home/guest-images/guest1-rhel6-64.img'/> <target dev='vda' bus='virtio'/> <shareable/< <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <interface type='bridge'> <mac address='52:54:00:b9:35:a9'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <input type='tablet' bus='usb'/> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes'/> <sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='cirrus' vram='9216' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> </devices> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-Domain_Commands-Creating_a_virtual_machine_XML_dump_configuration_file
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service Red Hat Developer Hub 1.3 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/index
Chapter 7. Virtualization
Chapter 7. Virtualization Increased Maximum Number of vCPUs in KVM The maximum number of supported virtual CPUs (vCPUs) in a KVM guest has been increased to 240. This increases the amount of virtual processing units that a user can assign to the guest, and therefore improves its performance potential. 5th Generation Intel Core New Instructions Support in QEMU, KVM, and libvirt API In Red Hat Enterprise Linux 7.1, the support for 5th Generation Intel Core processors has been added to the QEMU hypervisor, the KVM kernel code, and the libvirt API. This allows KVM guests to use the following instructions and features: ADCX, ADOX, RDSFEED, PREFETCHW, and supervisor mode access prevention (SMAP). USB 3.0 Support for KVM Guests Red Hat Enterprise Linux 7.1 features improved USB support by adding USB 3.0 host adapter (xHCI) emulation as a Technology Preview. Compression for the dump-guest-memory Command Since Red Hat Enterprise Linux 7.1, the dump-guest-memory command supports crash dump compression. This makes it possible for users who cannot use the virsh dump command to require less hard disk space for guest crash dumps. In addition, saving a compressed guest crash dump usually takes less time than saving a non-compressed one. Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7.1. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. Improve Network Performance on Hyper-V Several new features of the Hyper-V network driver have been introduced to improve network performance. For example, Receive-Side Scaling, Large Send Offload, Scatter/Gather I/O are now supported, and network throughput is increased. hypervfcopyd in hyperv-daemons The hypervfcopyd daemon has been added to the hyperv-daemons packages. hypervfcopyd is an implementation of file copy service functionality for Linux Guest running on Hyper-V 2012 R2 host. It enables the host to copy a file (over VMBUS) into the Linux Guest. New Features in libguestfs Red Hat Enterprise Linux 7.1 introduces a number of new features in libguestfs , a set of tools for accessing and modifying virtual machine disk images. Namely: virt-builder - a new tool for building virtual machine images. Use virt-builder to rapidly and securely create guests and customize them. virt-customize - a new tool for customizing virtual machine disk images. Use virt-customize to install packages, edit configuration files, run scripts, and set passwords. virt-diff - a new tool for showing differences between the file systems of two virtual machines. Use virt-diff to easily discover what files have been changed between snapshots. virt-log - a new tool for listing log files from guests. The virt-log tool supports a variety of guests including Linux traditional, Linux using journal, and Windows event log. virt-v2v - a new tool for converting guests from a foreign hypervisor to run on KVM, managed by libvirt, OpenStack, oVirt, Red Hat Enterprise Virtualization (RHEV), and several other targets. Currently, virt-v2v can convert Red Hat Enterprise Linux and Windows guests running on Xen and VMware ESX. Flight Recorder Tracing Support for flight recorder tracing has been introduced in Red Hat Enterprise Linux 7.1. Flight recorder tracing uses SystemTap to automatically capture qemu-kvm data as long as the guest machine is running. This provides an additional avenue for investigating qemu-kvm problems, more flexible than qemu-kvm core dumps. 
For detailed instructions on how to configure and use flight recorder tracing, see the Virtualization Deployment and Administration Guide . LPAR Watchdog for IBM System z As a Technology Preview, Red Hat Enterprise Linux 7.1 introduces a new watchdog driver for IBM System z. This enhanced watchdog supports Linux logical partitions (LPAR) as well as Linux guests in the z/VM hypervisor, and provides automatic reboot and automatic dump capabilities if a Linux system becomes unresponsive. RDMA-based Migration of Live Guests The support for Remote Direct Memory Access (RDMA)-based migration has been added to libvirt . As a result, it is now possible to use the new rdma:// migration URI to request migration over RDMA, which allows for significantly shorter live migration of large guests. Note that prior to using RDMA-based migration, RDMA has to be configured and libvirt has to be set up to use it. Removal of Q35 Chipset, PCI Express Bus, and AHCI Bus Emulation Red Hat Enterprise Linux 7.1 removes the emulation of the Q35 machine type, required also for supporting the PCI Express (PCIe) bus and the Advanced Host Controller Interface (AHCI) bus in KVM guest virtual machines. These features were previously available on Red Hat Enterprise Linux as Technology Previews. However, they are still being actively developed and might become available in the future as part of Red Hat products.
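As a brief illustration of the new libguestfs tools described above, the commands below sketch building and customizing a guest image. The template name, package name, password, and file paths are placeholders, and the available templates depend on the virt-builder repositories configured on the host.

# Build a new guest disk image from a template
virt-builder rhel-7.1 --output /var/lib/libvirt/images/newguest.img --size 20G

# Customize the image: install a package and set the root password
virt-customize -a /var/lib/libvirt/images/newguest.img --install httpd --root-password password:redhat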
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-virtualization
Chapter 4. Targeted Policy
Chapter 4. Targeted Policy Targeted policy is the default SELinux policy used in Red Hat Enterprise Linux. When using targeted policy, processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. For example, by default, logged-in users run in the unconfined_t domain, and system processes started by init run in the initrc_t domain; both of these domains are unconfined. Executable and writable memory checks may apply to both confined and unconfined domains. However, by default, subjects running in an unconfined domain cannot allocate writable memory and execute it. This reduces vulnerability to buffer overflow attacks. These memory checks are disabled by setting Booleans, which allow the SELinux policy to be modified at runtime. Boolean configuration is discussed later. 4.1. Confined Processes Almost every service that listens on a network, such as sshd or httpd , is confined in Red Hat Enterprise Linux. Also, most processes that run as the Linux root user and perform tasks for users, such as the passwd application, are confined. When a process is confined, it runs in its own domain, such as the httpd process running in the httpd_t domain. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. Complete this procedure to ensure that SELinux is enabled and the system is prepared to perform the following example: Procedure 4.1. How to Verify SELinux Status Run the sestatus command to confirm that SELinux is enabled, is running in enforcing mode, and that targeted policy is being used. The correct output should look similar to the output bellow. Refer to the section Section 5.4, "Permanent Changes in SELinux States and Modes" for detailed information about enabling and disabling SELinux. As the Linux root user, run the touch /var/www/html/testfile command to create a file. Run the ls -Z /var/www/html/testfile command to view the SELinux context: By default, Linux users run unconfined in Red Hat Enterprise Linux, which is why the testfile file is labeled with the SELinux unconfined_u user. RBAC is used for processes, not files. Roles do not have a meaning for files; the object_r role is a generic role used for files (on persistent storage and network file systems). Under the /proc/ directory, files related to processes may use the system_r role. The httpd_sys_content_t type allows the httpd process to access this file. The following example demonstrates how SELinux prevents the Apache HTTP Server ( httpd ) from reading files that are not correctly labeled, such as files intended for use by Samba. This is an example, and should not be used in production. It assumes that the httpd and wget packages are installed, the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 4.2. An Example of Confined Process As the Linux root user, run the service httpd start command to start the httpd process. The output is as follows if httpd starts successfully: Change into a directory where your Linux user has write access to, and run the wget http://localhost/testfile command. Unless there are changes to the default configuration, this command succeeds: The chcon command relabels files; however, such label changes do not survive when the file system is relabeled. For permanent changes that survive a file system relabel, use the semanage command, which is discussed later. 
As the Linux root user, run the following command to change the type to a type used by Samba: Run the ls -Z /var/www/html/testfile command to view the changes: Note: the current DAC permissions allow the httpd process access to testfile . Change into a directory where your Linux user has write access to, and run the wget http://localhost/testfile command. Unless there are changes to the default configuration, this command fails: As the Linux root user, run the rm -i /var/www/html/testfile command to remove testfile . If you do not require httpd to be running, as the Linux root user, run the service httpd stop command to stop httpd : This example demonstrates the additional security added by SELinux. Although DAC rules allowed the httpd process access to testfile in step 2, because the file was labeled with a type that the httpd process does not have access to, SELinux denied access. If the auditd daemon is running, an error similar to the following is logged to /var/log/audit/audit.log : Also, an error similar to the following is logged to /var/log/httpd/error_log :
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /selinux Current mode: enforcing Mode from config file: enforcing Policy version: 24 Policy from config file: targeted", "-rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/testfile", "~]# service httpd start Starting httpd: [ OK ]", "~]USD wget http://localhost/testfile --2009-11-06 17:43:01-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 0 [text/plain] Saving to: `testfile' [ <=> ] 0 --.-K/s in 0s 2009-11-06 17:43:01 (0.00 B/s) - `testfile' saved [0/0]", "~]# chcon -t samba_share_t /var/www/html/testfile", "-rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile", "~]USD wget http://localhost/testfile --2009-11-06 14:11:23-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 403 Forbidden 2009-11-06 14:11:23 ERROR 403: Forbidden.", "~]# service httpd stop Stopping httpd: [ OK ]", "type=AVC msg=audit(1220706212.937:70): avc: denied { getattr } for pid=1904 comm=\"httpd\" path=\"/var/www/html/testfile\" dev=sda5 ino=247576 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file type=SYSCALL msg=audit(1220706212.937:70): arch=40000003 syscall=196 success=no exit=-13 a0=b9e21da0 a1=bf9581dc a2=555ff4 a3=2008171 items=0 ppid=1902 pid=1904 auid=500 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)", "[Wed May 06 23:00:54 2009] [error] [client 127.0.0.1 ] (13)Permission denied: access to /testfile denied" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-Security-Enhanced_Linux-Targeted_Policy
Chapter 11. Monitoring project and application metrics using the Developer perspective
Chapter 11. Monitoring project and application metrics using the Developer perspective The Observe view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 11.1. Prerequisites You have created and deployed applications on OpenShift Container Platform . You have logged in to the web console and have switched to the Developer perspective . 11.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure Go to Observe to see the Dashboard , Metrics , Alerts , and Events for your project. Optional: Use the Dashboard tab to see graphs depicting the following application metrics: CPU usage Memory usage Bandwidth consumption Network-related information such as the rate of transmitted and received packets and the rate of dropped packets. In the Dashboard tab, you can access the Kubernetes compute resources dashboards. Note In the Dashboard list, the Kubernetes / Compute Resources / Namespace (Pods) dashboard is selected by default. Use the following options to see further details: Select a dashboard from the Dashboard list to see the filtered metrics. All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Select an option from the Time Range list to determine the time frame for the data being captured. Set a custom time range by selecting Custom time range from the Time Range list. You can input or select the From and To dates and times. Click Save to save the custom time range. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click Inspect located in the upper-right corner of every graph to see any particular graph details. The graph details appear in the Metrics tab. Optional: Use the Metrics tab to query for the required project metric. Figure 11.1. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optional: In the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Optional: Use the Alerts tab to do the following tasks: See the rules that trigger alerts for the applications in your project. Identify the alerts firing in the project. Silence such alerts if required. Figure 11.2. Monitoring alerts Use the following options to see further details: Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. In the Alerts Details page, you can click View Metrics to see the metrics for the alert. 
Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Optional: Use the Events tab to see the events for your project. Figure 11.3. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 11.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Observe tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 11.4. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 11.4. Image vulnerabilities breakdown In the Developer perspective, the project dashboard shows the Image Vulnerabilities link in the Status section. Using this link, you can view the Image Vulnerabilities breakdown window, which includes details regarding vulnerable container images and fixable container images. The icon color indicates severity: Red: High priority. Fix immediately. Orange: Medium priority. Can be fixed after high-priority vulnerabilities. Yellow: Low priority. Can be fixed after high and medium-priority vulnerabilities. Based on the severity level, you can prioritize vulnerabilities and fix them in an organized manner. Figure 11.5. Viewing image vulnerabilities 11.5. Monitoring your application and image vulnerabilities metrics After you create applications in your project and deploy them, use the Developer perspective in the web console to see the metrics for your application dependency vulnerabilities across your cluster. The metrics help you to analyze the following image vulnerabilities in detail: Total count of vulnerable images in a selected project Severity-based counts of all vulnerable images in a selected project Drilldown into severity to obtain the details, such as count of vulnerabilities, count of fixable vulnerabilities, and number of affected pods for each vulnerable image Prerequisites You have installed the Red Hat Quay Container Security operator from the Operator Hub. Note The Red Hat Quay Container Security operator detects vulnerabilities by scanning the images that are in the quay registry. 
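The operator prerequisite above can also be satisfied from the CLI with an OLM Subscription instead of the Operator Hub page. The following manifest is a sketch only; the package name, channel, and catalog source shown here are assumptions that you should confirm against OperatorHub before applying it:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: container-security-operator    # assumed package name; confirm in OperatorHub
  namespace: openshift-operators
spec:
  channel: stable                      # assumed channel; confirm in OperatorHub
  name: container-security-operator    # assumed package name; confirm in OperatorHub
  source: redhat-operators
  sourceNamespace: openshift-marketplace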
Procedure For a general overview of the image vulnerabilities, on the navigation panel of the Developer perspective, click Project to see the project dashboard. Click Image Vulnerabilities in the Status section. The window that opens displays details such as Vulnerable Container Images and Fixable Container Images . For a detailed vulnerabilities overview, click the Vulnerabilities tab on the project dashboard. To get more detail about an image, click its name. View the default graph with all types of vulnerabilities in the Details tab. Optional: Click the toggle button to view a specific type of vulnerability. For example, click App dependency to see vulnerabilities specific to application dependency. Optional: You can filter the list of vulnerabilities based on their Severity and Type or sort them by Severity , Package , Type , Source , Current Version , and Fixed in Version . Click a Vulnerability to get its associated details: Base image vulnerabilities display information from a Red Hat Security Advisory (RHSA). App dependency vulnerabilities display information from the Snyk security application. 11.6. Additional resources About OpenShift Container Platform monitoring
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/odc-monitoring-project-and-application-metrics-using-developer-perspective
Chapter 19. Managing cloud provider credentials
Chapter 19. Managing cloud provider credentials 19.1. About the Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. 19.1.1. Modes By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint , passthrough , or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers. Mint : In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. Passthrough : In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. Manual : In manual mode, a user manages cloud credentials instead of the CCO. Manual with AWS Security Token Service : In manual mode, you can configure an AWS cluster to use Amazon Web Services Security Token Service (AWS STS). With this configuration, the CCO uses temporary credentials for different components. Manual with GCP Workload Identity : In manual mode, you can configure a GCP cluster to use GCP Workload Identity. With this configuration, the CCO uses temporary credentials for different components. Table 19.1. CCO mode support matrix Cloud provider Mint Passthrough Manual Alibaba Cloud X Amazon Web Services (AWS) X X X Microsoft Azure X [1] X Google Cloud Platform (GCP) X X X IBM Cloud X Nutanix X Red Hat OpenStack Platform (RHOSP) X Red Hat Virtualization (RHV) X VMware vSphere X Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub. 19.1.2. Determining the Cloud Credential Operator mode For platforms that support using the CCO in multiple modes, you can determine what mode the CCO is configured to use by using the web console or the CLI. Figure 19.1. Determining the CCO configuration 19.1.2.1. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . 
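As a point of reference, the CloudCredential resource that you inspect in the YAML tab looks roughly like the following. This is a sketch: the apiVersion shown is an assumption, and the empty credentialsMode value indicates the default mode:
apiVersion: operator.openshift.io/v1   # assumed API group for the CloudCredential resource
kind: CloudCredential
metadata:
  name: cluster
spec:
  credentialsMode: ""                  # "", Mint, Passthrough, or Manual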
The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, inspect the annotations on the cluster root secret: Navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials To view the CCO mode that the cluster is using, click 1 annotation under Annotations , and check the value field. The following values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. 
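For reference, the cluster Authentication object examined in the web console procedure above looks roughly like the following when manual mode with AWS STS or GCP Workload Identity is in use. This is a sketch; the issuer URL is a placeholder, and the value is an empty string when the ccoctl utility was not used:
apiVersion: config.openshift.io/v1
kind: Authentication
metadata:
  name: cluster
spec:
  serviceAccountIssuer: https://<oidc_bucket_name>.s3.<aws_region>.amazonaws.com   # "" if ccoctl was not used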
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, run the following command: USD oc get secret <secret_name> \ -n kube-system \ -o jsonpath \ --template '{ .metadata.annotations }' where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. This command displays the value of the .metadata.annotations parameter in the cluster root secret object. The following output values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.3. Default behavior For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs. 
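The individual CLI checks from the procedure above can be strung together into a small helper script. This is only a convenience sketch for an AWS cluster (substitute gcp-credentials for GCP) that wraps commands already shown in this section:
#!/bin/bash
# Print the configured CCO mode ("", Mint, Passthrough, or Manual).
oc get cloudcredentials cluster -o=jsonpath='{.spec.credentialsMode}{"\n"}'

# For default-mode clusters, inspect the root secret annotations (mint vs. passthrough).
oc get secret aws-creds -n kube-system -o jsonpath --template '{ .metadata.annotations }'

# For manual-mode clusters, check whether an STS/Workload Identity issuer is configured.
oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'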
By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs. If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installer fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered. If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials. To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can manually create IAM for AWS , Azure , and GCP . 19.1.4. Additional resources Cluster Operators reference page for the Cloud Credential Operator 19.2. Using mint mode Mint mode is supported for Amazon Web Services (AWS) and Google Cloud Platform (GCP). Mint mode is the default mode on the platforms for which it is supported. In this mode, the Cloud Credential Operator (CCO) uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. For clusters that use the CCO in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. With mint mode, each cluster component has only the specific permissions it requires. Cloud credential reconciliation is automatic and continuous so that components can perform actions that require additional credentials or permissions. For example, a minor version cluster update (such as updating from OpenShift Container Platform 4.16 to 4.17) might include an updated CredentialsRequest resource for a cluster component. The CCO, operating in mint mode, uses the admin credential to process the CredentialsRequest resource and create users with limited permissions to satisfy the updated authentication requirements. Note By default, mint mode requires storing the admin credential in the cluster kube-system namespace. If this approach does not meet the security requirements of your organization, see Alternatives to storing administrator-level secrets in the kube-system project for AWS or GCP . 19.2.1. Mint mode permissions requirements When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user. 19.2.1.1. 
Amazon Web Services (AWS) permissions The credential you provide for mint mode in AWS must have the following permissions: iam:CreateAccessKey iam:CreateUser iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUser iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser iam:SimulatePrincipalPolicy 19.2.1.2. Google Cloud Platform (GCP) permissions The credential you provide for mint mode in GCP must have the following permissions: resourcemanager.projects.get serviceusage.services.list iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.roles.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 19.2.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> 19.2.3. Mint mode with removal or rotation of the administrator-level credential Currently, this mode is only supported on AWS and GCP. In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation. The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify if all CredentialsRequest objects have their required permissions, thus the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. The administrator-level credential is not stored in the cluster permanently. Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade. 19.2.3.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. 
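Written out as a conventional manifest, the AWS root secret format shown above looks like the following. This is the kube-system secret that the rotation and removal procedures below operate on; the values are placeholders:
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
stringData:
  aws_access_key_id: <access_key_id>          # with stringData, supply plain values; use the data field for base64-encoded values
  aws_secret_access_key: <secret_access_key>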
Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . Delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. 19.2.3.2. Removing cloud provider credentials For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates. Note Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.16 to 4.17), you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the update might be blocked. 
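If you prefer the CLI to the web console for the removal procedure that follows, the equivalent operation is a single command, shown here for an AWS cluster (use gcp-credentials for GCP):
# Remove the administrator-level root secret after installation (mint mode only).
oc delete secret aws-creds -n kube-system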
Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . 19.2.4. Additional resources Alternatives to storing administrator-level secrets in the kube-system project for AWS Alternatives to storing administrator-level secrets in the kube-system project for GCP 19.3. Using passthrough mode Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere. In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode. Note Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub. 19.3.1. Passthrough mode permissions requirements When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for. 19.3.1.1. Amazon Web Services (AWS) permissions The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for AWS . 19.3.1.2. Microsoft Azure permissions The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for Azure . 19.3.1.3. Google Cloud Platform (GCP) permissions The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for GCP . 19.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role. 19.3.1.5. Red Hat Virtualization (RHV) permissions To install an OpenShift Container Platform cluster on RHV, the CCO requires a credential with the following privileges: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the specific cluster that is targeted for OpenShift Container Platform deployment 19.3.1.6. 
VMware vSphere permissions To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges: Table 19.2. Required vSphere privileges Category Privileges Datastore Allocate space Folder Create folder , Delete folder vSphere Tagging All privileges Network Assign network Resource Assign virtual machine to resource pool Profile-driven storage All privileges vApp All privileges Virtual machine All privileges 19.3.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Microsoft Azure secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region> On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests: USD cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r Example output mycluster-2mpcn This value would be used in the secret data as follows: azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> Red Hat OpenStack Platform (RHOSP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init> Red Hat Virtualization (RHV) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle> VMware vSphere secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password> 19.3.3. Passthrough mode credential maintenance If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. 
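One way to review those CredentialsRequest CRs before an upgrade is to extract them from the target release image. The command below is the same oc adm release extract invocation used later in this chapter for STS, shown here for AWS with a placeholder release image:
RELEASE_IMAGE=<target_release_image>   # the release image pull spec you plan to upgrade to
oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --cloud=aws \
  --to=./credrequests                  # the directory is created if it does not exist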
To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating IAM for AWS , Azure , or GCP . 19.3.3.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials RHV ovirt-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources vSphere CSI Driver Operator 19.3.4. Reducing permissions after installation When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer. After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using. 
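For orientation, a CredentialsRequest object generally has the following shape. This is a hedged sketch for AWS; the actual permissions, names, and target namespaces come from the release image, not from this example:
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-component                      # illustrative name
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - ec2:DescribeInstances                  # illustrative permission only
      resource: "*"
  secretRef:
    name: example-component-cloud-credentials  # illustrative secret name
    namespace: openshift-example-component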
To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating IAM for AWS , Azure , or GCP . 19.3.5. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP 19.4. Using manual mode Manual mode is supported for Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and Google Cloud Platform (GCP). In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster's cloud provider. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. For information about configuring your cloud provider to use manual mode, see the manual credentials management options for your cloud provider: Manually creating RAM resources for Alibaba Cloud Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP Configuring IAM for IBM Cloud Configuring IAM for Nutanix 19.4.1. Manual mode with cloud credentials created and managed outside of the cluster An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. With this configuration, the CCO uses temporary credentials for different components. For more information, see Using manual mode with Amazon Web Services Security Token Service or Using manual mode with GCP Workload Identity . 19.4.2. Updating cloud provider resources with manually maintained credentials Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. 
Example credrequests directory contents for OpenShift Container Platform 4.12 on AWS 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6 1 The Machine API Operator CR is required. 2 The Cloud Credential Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Network Operator CR is required. 6 The Storage Operator CR is an optional component and might be disabled in your cluster. Example credrequests directory contents for OpenShift Container Platform 4.12 on GCP 0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. 19.4.2.1. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade. 19.4.3. 
Additional resources Manually creating RAM resources for Alibaba Cloud Manually creating IAM for AWS Using manual mode with Amazon Web Services Security Token Service Manually creating IAM for Azure Manually creating IAM for GCP Using manual mode with GCP Workload Identity Configuring IAM for IBM Cloud Configuring IAM for Nutanix 19.5. Using manual mode with Amazon Web Services Security Token Service Manual mode with STS is supported for Amazon Web Services (AWS). Note This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature. 19.5.1. About manual mode with AWS Security Token Service In manual mode with STS, the individual OpenShift Container Platform cluster components use AWS Security Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. 19.5.1.1. AWS Security Token Service authentication process The AWS Security Token Service (STS) and the AssumeRole API action allow pods to retrieve access keys that are defined by an IAM role policy. The OpenShift Container Platform cluster includes a Kubernetes service account signing service. This service uses a private key to sign service account JSON web tokens (JWT). A pod that requires a service account token requests one through the pod specification. When the pod is created and assigned to a node, the node retrieves a signed service account from the service account signing service and mounts it onto the pod. Clusters that use STS contain an IAM role ID in their Kubernetes configuration secrets. Workloads assume the identity of this IAM role ID. The signed service account token issued to the workload aligns with the configuration in AWS, which allows AWS STS to grant access keys for the specified IAM role to the workload. AWS STS grants access keys only for requests that include service account tokens that meet the following conditions: The token name and namespace match the service account name and namespace. The token is signed by a key that matches the public key. The public key pair for the service account signing key used by the cluster is stored in an AWS S3 bucket. AWS STS federation validates that the service account token signature aligns with the public key stored in the S3 bucket. 19.5.1.2. Authentication flow for AWS STS The following diagram illustrates the authentication flow between AWS and the OpenShift Container Platform cluster when using AWS STS. Token signing is the Kubernetes service account signing service on the OpenShift Container Platform cluster. The Kubernetes service account in the pod is the signed service account token. Figure 19.2. AWS Security Token Service authentication flow Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider combined with AWS IAM roles. Service account tokens that are trusted by AWS IAM are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. 19.5.1.3. Token refreshing for AWS STS The signed service account token that a pod uses expires after a period of time. For clusters that use AWS STS, this time period is 3600 seconds, or one hour. 
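The request "through the pod specification" mentioned above takes the form of a projected service account token volume. A minimal sketch is shown below; the audience value and mount path are assumptions based on the conventions described in this chapter, and the one-hour expiration matches the lifetime stated above:
apiVersion: v1
kind: Pod
metadata:
  name: sts-example
spec:
  containers:
  - name: app
    image: <component_image>
    volumeMounts:
    - name: bound-sa-token
      mountPath: /var/run/secrets/openshift/serviceaccount
      readOnly: true
  volumes:
  - name: bound-sa-token
    projected:
      sources:
      - serviceAccountToken:
          audience: openshift          # assumed audience value
          expirationSeconds: 3600      # matches the one-hour lifetime described above
          path: token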
The kubelet on the node that the pod is assigned to ensures that the token is refreshed. The kubelet attempts to rotate a token when it is older than 80 percent of its time to live. 19.5.1.4. OpenID Connect requirements for AWS STS You can store the public portion of the encryption keys for your OIDC configuration in a public or private S3 bucket. The OIDC spec requires the use of HTTPS. AWS services require a public endpoint to expose the OIDC documents in the form of JSON web key set (JWKS) public keys. This allows AWS services to validate the bound tokens signed by Kubernetes and determine whether to trust certificates. As a result, both S3 bucket options require a public HTTPS endpoint and private endpoints are not supported. To use AWS STS, the public AWS backbone for the AWS STS service must be able to communicate with a public S3 bucket or a private S3 bucket with a public CloudFront endpoint. You can choose which type of bucket to use when you process CredentialsRequest objects during installation: By default, the CCO utility ( ccoctl ) stores the OIDC configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. As an alternative, you can have the ccoctl utility store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL. 19.5.1.5. AWS component secret formats Using manual mode with STS changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: AWS secret format using long-lived credentials apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key> 1 The namespace for the component. 2 The name of the component secret. AWS secret format with STS apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator-role-name> 3 web_identity_token_file: <path-to-token> 4 1 The namespace for the component. 2 The name of the component secret. 3 The IAM role for the component. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.2. Installing an OpenShift Container Platform cluster configured for manual mode with STS To install a cluster that is configured to use the Cloud Credential Operator (CCO) in manual mode with STS: Configure the Cloud Credential Operator utility . Create the required AWS resources individually , or with a single command . Run the OpenShift Container Platform installer . Verify that the cluster is using short-lived credentials . Note Because the cluster is operating in manual mode when using STS, it is not able to create new credentials for components with the permissions that they require. When upgrading to a different minor version of OpenShift Container Platform, there are often new AWS permission requirements. Before upgrading a cluster that is using STS, the cluster administrator must manually ensure that the AWS permissions are sufficient for existing components and available to any new components. Additional resources Configuring the Cloud Credential Operator utility for a cluster update 19.5.2.1. 
Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Table 19.3. Required AWS permissions Permission type Required permissions iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 19.5.2.2. Creating AWS resources with the Cloud Credential Operator utility You can use the CCO utility ( ccoctl ) to create the required AWS resources individually , or with a single command . 19.5.2.2.1. Creating AWS resources individually If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. For example, this option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster: USD ccoctl aws create-key-pair Example output: 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS: USD ccoctl aws create-identity-provider \ --name=<name> \ --region=<aws_region> \ --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public where: <name> is the name used to tag any cloud resources that are created for tracking. <aws-region> is the AWS region in which cloud resources will be created. <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. 
Example output: 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=aws \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on AWS 0000_30_cluster-api_00_credentials-request.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 7 1 For clusters that use the TechPreviewNoUpgrade feature set, the Cluster API Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. 
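The trust policy that ccoctl ties to the OIDC identity provider generally has the following shape. This is an illustrative AWS policy document, not output from the tool; the account ID, issuer host, and service account names are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<name>-oidc.s3.<aws_region>.amazonaws.com:sub": "system:serviceaccount:<component_namespace>:<component_service_account>"
        }
      }
    }
  ]
}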
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output: cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 19.5.2.2.2. Creating AWS resources with a single command If you do not need to review the JSON files that the ccoctl tool creates before modifying AWS resources, and if the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set the USDRELEASE_IMAGE variable by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=aws \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on AWS 0000_30_cluster-api_00_credentials-request.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 7 1 For clusters that use the TechPreviewNoUpgrade feature set, the Cluster API Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. 
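For example, if the optional storage capability is disabled in your cluster, you can remove its request file, which is listed above, before processing the directory; the path prefix is a placeholder for your credrequests directory: USD rm <path_to_directory_with_list_of_credentials_requests>/credrequests/0000_50_cluster-storage-operator_03_credentials_request_aws.yaml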
Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output: cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 19.5.2.3. Running the installer Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform release image. Procedure Change to the directory that contains the installation program and create the install-config.yaml file: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . Create the required OpenShift Container Platform installation manifests: USD openshift-install create manifests Copy the manifests that ccoctl generated to the manifests directory that the installation program created: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the private key that the ccoctl generated in the tls directory to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . Run the OpenShift Container Platform installer: USD ./openshift-install create cluster 19.5.2.4. 
Verifying the installation Connect to the OpenShift Container Platform cluster. Verify that the cluster does not have root credentials: USD oc get secrets -n kube-system aws-creds The output should look similar to: Error from server (NotFound): secrets "aws-creds" not found Verify that the components are assuming the IAM roles that are specified in the secret manifests, instead of using credentials that are created by the CCO: Example command with the Image Registry Operator USD oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode The output should show the role and web identity token that are used by the component and look similar to: Example output with the Image Registry Operator [default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token 19.5.3. Additional resources Preparing to update a cluster with manually maintained credentials 19.6. Using manual mode with GCP Workload Identity Manual mode with GCP Workload Identity is supported for Google Cloud Platform (GCP). Note This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature. 19.6.1. About manual mode with GCP Workload Identity In manual mode with GCP Workload Identity, the individual OpenShift Container Platform cluster components can impersonate IAM service accounts using short-term, limited-privilege credentials. Requests for new and refreshed credentials are automated by using an appropriately configured OpenID Connect (OIDC) identity provider combined with IAM service accounts. Service account tokens that are trusted by GCP are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. Tokens are refreshed after one hour. Figure 19.3. Workload Identity authentication flow Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are provided to individual OpenShift Container Platform components. GCP secret format apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3 1 The namespace for the component. 2 The name of the component secret. 3 The Base64 encoded service account. Content of the Base64 encoded service_account.json file using long-lived credentials { "type": "service_account", 1 "project_id": "<project_id>", "private_key_id": "<private_key_id>", "private_key": "<private_key>", 2 "client_email": "<client_email_address>", "client_id": "<client_id>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>" } 1 The credential type is service_account . 2 The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not rotated. 
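To check which credential format a component secret contains on a live cluster, you can decode its service_account.json key. A sketch, with the secret name and namespace as placeholders: USD oc get secret <target_secret_name> -n <target_namespace> -o json | jq -r '.data."service_account.json"' | base64 -d The output shows either the long-lived service_account format above or the external_account format that follows.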
Content of the Base64 encoded service_account.json file using GCP Workload Identity { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", 2 "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken", 3 "credential_source": { "file": "<path_to_token>", 4 "format": { "type": "text" } } } 1 The credential type is external_account . 2 The target audience is the GCP Workload Identity provider. 3 The resource URL of the service account that can be impersonated with these credentials. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.6.2. Installing an OpenShift Container Platform cluster configured for manual mode with GCP Workload Identity To install a cluster that is configured to use the Cloud Credential Operator (CCO) in manual mode with GCP Workload Identity: Configure the Cloud Credential Operator utility . Create the required GCP resources . Run the OpenShift Container Platform installer . Verify that the cluster is using short-lived credentials . Note Because the cluster is operating in manual mode when using GCP Workload Identity, it is not able to create new credentials for components with the permissions that they require. When upgrading to a different minor version of OpenShift Container Platform, there are often new GCP permission requirements. Before upgrading a cluster that is using GCP Workload Identity, the cluster administrator must manually ensure that the GCP permissions are sufficient for existing components and available to any new components. Additional resources Configuring the Cloud Credential Operator utility for a cluster update 19.6.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 19.6.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set the USDRELEASE_IMAGE variable by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=gcp \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on GCP 0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_cluster-api_00_credentials-request.yaml 2 0000_30_machine-api-operator_00_credentials-request.yaml 3 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 4 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 5 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 6 0000_50_cluster-network-operator_02-cncc-credentials.yaml 7 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 8 1 The Cloud Controller Manager Operator CR is required. 2 For clusters that use the TechPreviewNoUpgrade feature set, the Cluster API Operator CR is required. 3 The Machine API Operator CR is required. 4 The Cloud Credential Operator CR is required. 5 The Image Registry Operator CR is required. 6 The Ingress Operator CR is required. 7 The Network Operator CR is required. 8 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: USD ccoctl gcp create-all \ --name=<name> \ --region=<gcp_region> \ --project=<gcp_project_id> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests where: <name> is the user-defined name for all created GCP resources used for tracking. <gcp_region> is the GCP region in which cloud resources will be created. <gcp_project_id> is the GCP project ID in which cloud resources will be created. 
<path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output: cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 19.6.2.3. Running the installer Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform release image. Procedure Change to the directory that contains the installation program and create the install-config.yaml file: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . Create the required OpenShift Container Platform installation manifests: USD openshift-install create manifests Copy the manifests that ccoctl generated to the manifests directory that the installation program created: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the private key that the ccoctl generated in the tls directory to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . Run the OpenShift Container Platform installer: USD ./openshift-install create cluster 19.6.2.4. Verifying the installation Connect to the OpenShift Container Platform cluster. 
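One straightforward way to connect is to export the kubeconfig file that the installer writes to the auth subdirectory of your installation directory, for example: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig Subsequent oc commands then run against the newly installed cluster.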
Verify that the cluster does not have root credentials: USD oc get secrets -n kube-system gcp-credentials The output should look similar to: Error from server (NotFound): secrets "gcp-credentials" not found Verify that the components are assuming the service accounts that are specified in the secret manifests, instead of using credentials that are created by the CCO: Example command with the Image Registry Operator USD oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data."service_account.json"' | base64 -d The output should show the role and web identity token that are used by the component and look similar to: Example output with the Image Registry Operator { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken", 2 "credential_source": { "file": "/var/run/secrets/openshift/serviceaccount/token", "format": { "type": "text" } } } 1 The credential type is external_account . 2 The resource URL of the service account used by the Image Registry Operator. 19.6.3. Additional resources Preparing to update a cluster with manually maintained credentials
[ "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>", "cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r", "mycluster-2mpcn", "azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 
0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator-role-name> 3 web_identity_token_file: <path-to-token> 4", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> --region=<aws_region> --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1", "0000_30_cluster-api_00_credentials-request.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 7", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests 
--identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1", "0000_30_cluster-api_00_credentials-request.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 7", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system aws-creds", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode", "[default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3", "{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", 
\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_cluster-api_00_credentials-request.yaml 2 0000_30_machine-api-operator_00_credentials-request.yaml 3 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 4 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 5 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 6 0000_50_cluster-network-operator_02-cncc-credentials.yaml 7 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 8", "ccoctl gcp create-all --name=<name> --region=<gcp_region> --project=<gcp_project_id> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get 
secrets -n kube-system gcp-credentials", "Error from server (NotFound): secrets \"gcp-credentials\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data.\"service_account.json\"' | base64 -d", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken\", 2 \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/managing-cloud-provider-credentials
Chapter 4. Managing namespace buckets
Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enables you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list bucket which is using this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PubObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. 
The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. 
Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <aws-region-name> The AWS bucket region. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. 
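A typical invocation looks like the following sketch; the flag names are assumptions based on the MCG CLI's ibm-cos provider, so verify them with noobaa namespacestore create ibm-cos --help, and the placeholders are explained below: USD noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage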
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage Object Storage Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . 
Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that the your namespace bucket is present in the list and is in Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. 
Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. 
Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the previous step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, the /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create an MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as those of the legacy application. You can find them in the output of the earlier commands. For example: Create an MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same, which results in permission denied errors or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files takes place, and the SELinux labels now match the labels in the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name> Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. 
This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the SELinux label to use in the security context of the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the SELinux label in the security context of the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.
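After the SELinux labels match, you can check end-to-end S3 access to the legacy data with any S3 client. The following is a minimal sketch and not part of the official procedure; it assumes the default s3 route that MCG exposes in the openshift-storage namespace, the hypothetical legacy-bucket name used in the examples above, and the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values returned by the noobaa account create command:

# Look up the MCG S3 endpoint; the route is typically named "s3" in openshift-storage
S3_ENDPOINT="https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')"
# Export the credentials that `noobaa account create` printed for the NSFS account
export AWS_ACCESS_KEY_ID=<access-key-from-account-output>
export AWS_SECRET_ACCESS_KEY=<secret-key-from-account-output>
# List the bucket that is backed by the legacy CephFS data
aws s3 --endpoint-url "$S3_ENDPOINT" --no-verify-ssl ls s3://legacy-bucket/
# Copy an object out of the bucket to confirm read access (object name is a placeholder)
aws s3 --endpoint-url "$S3_ENDPOINT" --no-verify-ssl cp s3://legacy-bucket/<object-name> ./local-copy

If the listing fails with an access error, recheck that the account UID and GID match the file ownership on the CephFS volume and that the SELinux labels are aligned as described above.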
[ "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> --region <aws-region-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos 
<namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "noobaa account create <noobaa-account-name> [flags]", "noobaa account create testaccount --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore", "NooBaaAccount spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>", "noobaa account list NAME DEFAULT_RESOURCE PHASE AGE testaccount noobaa-default-backing-store Ready 1m17s", "oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001", "oc get ns <application_namespace> -o yaml | grep scc", "oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000", "oc project <application_namespace>", "oc project testnamespace", "oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s", "oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s", "oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}", "oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]", "oc exec -it <pod_name> -- df <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "oc get pv | grep <pv_name>", "oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s", "oc get pv <pv_name> -o yaml", "oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound", "cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF", "oc create -f <YAML_file>", "oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created", "oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s", "oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".", "noobaa namespacestore create nsfs 
<nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'", "noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'", "oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace", "noobaa account create <user_account> --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'", "noobaa account create leguser --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'", "oc exec -it <pod_name> -- mkdir <mount_path> /nsfs", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs", "noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'", "noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'", "oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "noobaa bucket delete <bucket_name>", "noobaa bucket delete legacy-bucket", "noobaa account delete <user_account>", "noobaa account delete leguser", "noobaa namespacestore delete <nsfs_namespacestore>", "noobaa namespacestore delete legacy-namespace", "oc delete pv <cephfs_pv_name>", "oc delete pvc <cephfs_pvc_name>", "oc delete pv cephfs-pv-legacy-openshift-storage", "oc delete pvc cephfs-pvc-legacy", "oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "oc edit ns <appplication_namespace>", "oc edit ns testnamespace", "oc get ns <application_namespace> -o yaml | grep sa.scc.mcs", "oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF", "oc create -f scc.yaml", "oc create serviceaccount <service_account_name>", "oc create serviceaccount testnamespacesa", "oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>", "oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa", "oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'", "oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'", "oc edit dc <pod_name> -n <application_namespace>", "spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>", "oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace", "spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0", "oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext", "oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0" ]
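For quick reference, the core of the procedure in section 4.4.3 reduces to the following sequence. This is only a condensed sketch of the commands already shown above, using the same hypothetical resource names (cephfs-pvc-legacy, legacy-namespace, leguser, and legacy-bucket); adjust the names, UID, and GID to your environment:

# Back the NSFS namespacestore with the CephFS PVC cloned into openshift-storage
noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'
# Create an MCG account mapped to the legacy UID and GID so file ownership lines up
noobaa account create leguser --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'
# Expose the dedicated nsfs/ folder of the share as an S3 bucket
noobaa api bucket_api create_bucket '{ "name": "legacy-bucket", "namespace":{ "write_resource": { "resource": "legacy-namespace", "path": "nsfs/" }, "read_resources": [ { "resource": "legacy-namespace", "path": "nsfs/" }] } }'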
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/Managing-namespace-buckets_rhodf
Chapter 1. Access control
Chapter 1. Access control Access control might need to be created and managed manually. You must configure authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes to onboard workloads to Identity and Access Management (IAM). For more information, see Understanding authentication in the OpenShift Container Platform documentation. Role-based access control and authentication identify the roles and cluster credentials associated with the user. See the following documentation for information about access and credentials. Required access: Cluster administrator Role-based access control Implementing role-based access control Bringing your own observability Certificate Authority (CA) certificates 1.1. Role-based access control Red Hat Advanced Cluster Management for Kubernetes supports role-based access control (RBAC). Your role determines the actions that you can perform. RBAC is based on the authorization mechanisms in Kubernetes, similar to Red Hat OpenShift Container Platform. For more information about RBAC, see the OpenShift RBAC overview in the OpenShift Container Platform documentation . Note: Action buttons are disabled in the console if the user-role access is impermissible. 1.1.1. Overview of roles Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the following table of role definitions that are supported in Red Hat Advanced Cluster Management for Kubernetes: Table 1.1. Role definition table Role Definition cluster-admin This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access. open-cluster-management:cluster-manager-admin A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a Red Hat Advanced Cluster Management for Kubernetes super user, who has all access. This role allows the user to create a ManagedCluster resource. open-cluster-management:admin:<managed_cluster_name> A user with cluster binding to the open-cluster-management:admin:<managed_cluster_name> role has administrator access to the ManagedCluster resource named, <managed_cluster_name> . When a user has a managed cluster, this role is automatically created. open-cluster-management:view:<managed_cluster_name> A user with cluster binding to the open-cluster-management:view:<managed_cluster_name> role has view access to the ManagedCluster resource named, <managed_cluster_name> . open-cluster-management:managedclusterset:admin:<managed_clusterset_name> A user with cluster binding to the open-cluster-management:managedclusterset:admin:<managed_clusterset_name> role has administrator access to the ManagedCluster resource named <managed_clusterset_name> . The user also has administrator access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which have the managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name> . A role binding is automatically generated when you are using a cluster set. See Creating a ManagedClusterSet to learn how to manage the resource. 
open-cluster-management:managedclusterset:view:<managed_clusterset_name> A user with cluster binding to the open-cluster-management:managedclusterset:view:<managed_clusterset_name> role has view access to the ManagedCluster resource named, <managed_clusterset_name> . The user also has view access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which have the managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name> . For more details on how to manage managed cluster set resources, see Creating a ManagedClusterSet . open-cluster-management:subscription-admin A user with the open-cluster-management:subscription-admin role can create Git subscriptions that deploy resources to multiple namespaces. The resources are specified in Kubernetes resource YAML files in the subscribed Git repository. Note: When a non-subscription-admin user creates a subscription, all resources are deployed into the subscription namespace regardless of the namespaces specified in the resources. For more information, see the Application lifecycle RBAC section. admin, edit, view Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide. open-cluster-management:managedclusterset:bind:<managed_clusterset_name> A user with the open-cluster-management:managedclusterset:bind:<managed_clusterset_name> role has view access to the managed cluster resource called <managed_clusterset_name> . The user can bind <managed_clusterset_name> to a namespace. The user also has view access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which have the following managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name> . See Creating a ManagedClusterSet to learn how to manage the resource. Important: Any user can create projects in OpenShift Container Platform, which gives administrator role permissions for the namespace. If a user does not have role access to a cluster, the cluster name is not displayed. The cluster name might be displayed with the following symbol: - . See Implementing role-based access control for more details. 1.2. Implementing role-based access control Red Hat Advanced Cluster Management for Kubernetes RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. The multicluster engine operator is a prerequisite for Red Hat Advanced Cluster Management and provides its cluster lifecycle function. To manage RBAC for clusters with the multicluster engine operator, use the RBAC guidance from the cluster lifecycle multicluster engine for Kubernetes operator Role-based access control documentation. View the following sections for more information on RBAC for specific lifecycles for Red Hat Advanced Cluster Management: Application lifecycle RBAC Console and API RBAC table for application lifecycle Governance lifecycle RBAC Console and API RBAC table for governance lifecycle Observability RBAC Console and API RBAC table for observability lifecycle 1.2.1. 
Application lifecycle RBAC When you create an application, the subscription namespace is created and the configuration map is created in the subscription namespace. You must also have access to the channel namespace. When you want to apply a subscription, you must be a subscription administrator. For more information on managing applications, see Creating an allow and deny list as subscription administrator . View the following application lifecycle RBAC operations: Create and administer applications on all managed clusters with a user named username . You must create a cluster role binding and bind it to username . Run the following command: This role is a super user, which has access to all resources and actions. You can create the namespace for the application and all application resources in the namespace with this role. Create applications that deploy resources to multiple namespaces. You must create a cluster role binding to the open-cluster-management:subscription-admin cluster role, and bind it to a user named username . Run the following command: Create and administer applications in the cluster-name managed cluster, with the username user. You must create a cluster role binding to the open-cluster-management:admin:<cluster-name> cluster role and bind it to username by entering the following command: This role has read and write access to all application resources on the managed cluster, cluster-name . Repeat this if access for other managed clusters is required. Create a namespace role binding to the application namespace using the admin role and bind it to username by entering the following command: This role has read and write access to all application resources in the application namespace. Repeat this if access for other applications is required or if the application deploys to multiple namespaces. You can create applications that deploy resources to multiple namespaces. Create a cluster role binding to the open-cluster-management:subscription-admin cluster role and bind it to username by entering the following command: To view an application on a managed cluster named cluster-name with the user named username , create a cluster role binding to the open-cluster-management:view:<cluster-name> cluster role and bind it to username . Enter the following command: This role has read access to all application resources on the managed cluster, cluster-name . Repeat this if access for other managed clusters is required. Create a namespace role binding to the application namespace using the view role and bind it to username . Enter the following command: This role has read access to all application resources in the application namespace. Repeat this if access for other applications is required. 1.2.1.1. Console and API RBAC table for application lifecycle View the following console and API RBAC tables for application lifecycle: Table 1.2. Console RBAC table for application lifecycle Resource Admin Edit View Application create, read, update, delete create, read, update, delete read Channel create, read, update, delete create, read, update, delete read Subscription create, read, update, delete create, read, update, delete read Table 1.3. 
API RBAC table for application lifecycle API Admin Edit View applications.app.k8s.io create, read, update, delete create, read, update, delete read channels.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read deployables.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read helmreleases.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read placements.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read placementrules.apps.open-cluster-management.io (Deprecated) create, read, update, delete create, read, update, delete read subscriptions.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read configmaps create, read, update, delete create, read, update, delete read secrets create, read, update, delete create, read, update, delete read namespaces create, read, update, delete create, read, update, delete read 1.2.2. Governance lifecycle RBAC To perform governance lifecycle operations, you need access to the namespace where the policy is created, along with access to the managed cluster where the policy is applied. The managed cluster must also be part of a ManagedClusterSet that is bound to the namespace. To continue to learn about ManagedClusterSet , see ManagedClusterSets Introduction . After you select a namespace, such as rhacm-policies , with one or more bound ManagedClusterSets , and after you have access to create Placement objects in the namespace, view the following operations: To create a ClusterRole named rhacm-edit-policy with Policy , PlacementBinding , and PolicyAutomation edit access, run the following command: To create a policy in the rhacm-policies namespace, create a namespace RoleBinding , such as rhacm-edit-policy , to the rhacm-policies namespace using the ClusterRole created previously. Run the following command: To view policy status of a managed cluster, you need permission to view policies in the managed cluster namespace on the hub cluster. If you do not have view access, such as through the OpenShift view ClusterRole , create a ClusterRole , such as rhacm-view-policy , with view access to policies with the following command: To bind the new ClusterRole to the managed cluster namespace, run the following command to create a namespace RoleBinding : 1.2.2.1. Console and API RBAC table for governance lifecycle View the following console and API RBAC tables for governance lifecycle: Table 1.4. Console RBAC table for governance lifecycle Resource Admin Edit View Policies create, read, update, delete read, update read PlacementBindings create, read, update, delete read, update read Placements create, read, update, delete read, update read PlacementRules (deprecated) create, read, update, delete read, update read PolicyAutomations create, read, update, delete read, update read Table 1.5. API RBAC table for governance lifecycle API Admin Edit View policies.policy.open-cluster-management.io create, read, update, delete read, update read placementbindings.policy.open-cluster-management.io create, read, update, delete read, update read policyautomations.policy.open-cluster-management.io create, read, update, delete read, update read Continue to learn about securing your cluster, see Security overview . 1.2.3. Observability RBAC To view the observability metrics for a managed cluster, you must have view access to that managed cluster on the hub cluster. 
View the following list of observability features: Access managed cluster metrics. Users are denied access to managed cluster metrics if they are not assigned the view role for the managed cluster on the hub cluster. Run the following command to verify whether a user has the authority to create a managedClusterView role in the managed cluster namespace: As a cluster administrator, create a managedClusterView role in the managed cluster namespace. Run the following command: Then apply the role and bind it to a user by creating a role binding. Run the following command: Search for resources. To verify whether a user has access to resource types, use the following command: Note: <resource-type> must be plural. To view observability data in Grafana, you must have a RoleBinding resource in the same namespace as the managed cluster. View the following RoleBinding example: kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: <replace-with-name-of-rolebinding> namespace: <replace-with-name-of-managedcluster-namespace> subjects: - kind: <replace with User|Group|ServiceAccount> apiGroup: rbac.authorization.k8s.io name: <replace with name of User|Group|ServiceAccount> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view See Role binding policy for more information. See Customizing observability to configure observability. 1.2.3.1. Console and API RBAC table for observability lifecycle To manage components of observability, view the following API RBAC table: Table 1.6. API RBAC table for observability API Admin Edit View multiclusterobservabilities.observability.open-cluster-management.io create, read, update, and delete read, update read searchcustomizations.search.open-cluster-management.io create, get, list, watch, update, delete, patch - - policyreports.wgpolicyk8s.io get, list, watch get, list, watch get, list, watch 1.3. Bringing your own observability Certificate Authority (CA) certificates When you install Red Hat Advanced Cluster Management for Kubernetes, only Certificate Authority (CA) certificates for observability are provided by default. If you do not want to use the default observability CA certificates generated by Red Hat Advanced Cluster Management, you can choose to bring your own observability CA certificates before you enable observability. 1.3.1. Generating CA certificates by using OpenSSL commands Observability requires two CA certificates: one for the server side and one for the client side. Generate your CA RSA private keys with the following commands: openssl genrsa -out serverCAKey.pem 2048 openssl genrsa -out clientCAKey.pem 2048 Generate the self-signed CA certificates using the private keys. Run the following commands: openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 -out serverCACert.pem openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 -out clientCACert.pem 1.3.2. Creating the secrets associated with your own observability CA certificates Complete the following steps to create the secrets: Create the observability-server-ca-certs secret by using your certificate and private key. Run the following command: oc -n open-cluster-management-observability create secret tls observability-server-ca-certs --cert ./serverCACert.pem --key ./serverCAKey.pem Create the observability-client-ca-certs secret by using your certificate and private key. 
Run the following command: oc -n open-cluster-management-observability create secret tls observability-client-ca-certs --cert ./clientCACert.pem --key ./clientCAKey.pem 1.3.3. Additional resources See Customizing route certification . See Customizing certificates for accessing the object store .
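After creating the two secrets, you can confirm that they exist and carry the expected keys before enabling observability. This is a minimal verification sketch, not part of the official procedure; the secret names match the ones created in the commands above, and tls.crt and tls.key are the keys that oc create secret tls populates:

# Confirm that both CA secrets exist in the observability namespace
oc -n open-cluster-management-observability get secrets observability-server-ca-certs observability-client-ca-certs
# Inspect the keys stored in the server-side secret; expect tls.crt and tls.key
oc -n open-cluster-management-observability get secret observability-server-ca-certs -o jsonpath='{.data}'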
[ "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>", "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>", "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>", "create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=admin --user=<username>", "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>", "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>", "create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=view --user=<username>", "create clusterrole rhacm-edit-policy --resource=policies.policy.open-cluster-management.io,placementbindings.policy.open-cluster-management.io,policyautomations.policy.open-cluster-management.io,policysets.policy.open-cluster-management.io --verb=create,delete,get,list,patch,update,watch", "create rolebinding rhacm-edit-policy -n rhacm-policies --clusterrole=rhacm-edit-policy --user=<username>", "create clusterrole rhacm-view-policy --resource=policies.policy.open-cluster-management.io --verb=get,list,watch", "create rolebinding rhacm-view-policy -n <cluster name> --clusterrole=rhacm-view-policy --user=<username>", "auth can-i create ManagedClusterView -n <managedClusterName> --as=<user>", "create role create-managedclusterview --verb=create --resource=managedclusterviews -n <managedClusterName>", "create rolebinding user-create-managedclusterview-binding --role=create-managedclusterview --user=<user> -n <managedClusterName>", "auth can-i list <resource-type> -n <namespace> --as=<rbac-user>", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: <replace-with-name-of-rolebinding> namespace: <replace-with-name-of-managedcluster-namespace> subjects: - kind: <replace with User|Group|ServiceAccount> apiGroup: rbac.authorization.k8s.io name: <replace with name of User|Group|ServiceAccount> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view", "openssl genrsa -out serverCAKey.pem 2048 openssl genrsa -out clientCAKey.pem 2048", "openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 -out serverCACert.pem openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 -out clientCACert.pem", "-n open-cluster-management-observability create secret tls observability-server-ca-certs --cert ./serverCACert.pem --key ./serverCAKey.pem", "-n open-cluster-management-observability create secret tls observability-client-ca-certs --cert ./clientCACert.pem --key ./clientCAKey.pem" ]
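The create clusterrolebinding commands listed above can also be expressed declaratively and applied with oc apply. The following is a minimal sketch, not taken from the product documentation; the binding name and user name are hypothetical placeholders, and the ClusterRole is the Red Hat Advanced Cluster Management super user role described in the role definition table:

cat << EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: acm-cluster-manager-admin-binding   # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: open-cluster-management:cluster-manager-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <username>                          # replace with the actual user
EOF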
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/access_control/access-control