title | content | commands | url
---|---|---|---|
Chapter 3. Hot Rod Java Client Configuration | Chapter 3. Hot Rod Java Client Configuration Data Grid provides a Hot Rod Java client configuration API that exposes configuration properties. 3.1. Adding Hot Rod Java Client Dependencies Add Hot Rod Java client dependencies to include it in your project. Prerequisites Java {jdkminversion} or greater. Procedure Add the infinispan-client-hotrod artifact as a dependency in your pom.xml as follows: <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-client-hotrod</artifactId> </dependency> Reference Data Grid Server Requirements 3.2. Configuring Hot Rod Client Connections Configure Hot Rod Java client connections to Data Grid Server. Procedure Use the ConfigurationBuilder class to generate immutable configuration objects that you can pass to RemoteCacheManager or use a hotrod-client.properties file on the application classpath. ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .addServer() .host("192.0.2.0") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username("username") .password("changeme") .realm("default") .saslMechanism("SCRAM-SHA-512"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties Configuring Hot Rod URIs You can also configure Hot Rod client connections with URIs as follows: ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.uri("hotrod://username:[email protected]:11222,192.0.2.0:11222?auth_realm=default&sasl_mechanism=SCRAM-SHA-512"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties Adding properties outside the classpath If the hotrod-client.properties file is not on the application classpath then you need to specify the location, as in the following example: ConfigurationBuilder builder = new ConfigurationBuilder(); Properties p = new Properties(); try(Reader r = new FileReader("/path/to/hotrod-client.properties")) { p.load(r); builder.withProperties(p); } RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); Additional resources Hot Rod Client Configuration org.infinispan.client.hotrod.configuration.ConfigurationBuilder org.infinispan.client.hotrod.RemoteCacheManager 3.2.1. Defining Data Grid Clusters in Client Configuration Provide the locations of Data Grid clusters in Hot Rod client configuration. Procedure Provide at least one Data Grid cluster name along with a host name and port for at least one node with the ClusterConfigurationBuilder class. If you want to define a cluster as default, so that clients always attempt to connect to it first, then define a server list with the addServers("<host_name>:<port>; <host_name>:<port>") method. 
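If you prefer to define clusters in hotrod-client.properties instead of code, the client also accepts cluster properties. The following sketch is not taken from this guide and the infinispan.client.hotrod.cluster.<cluster_name> property form is an assumption based on the Hot Rod client configuration properties reference, so verify it against your client version before use:
infinispan.client.hotrod.server_list = hostA1:11222;hostA2:11222
# Assumed property form: one entry per named failover cluster.
infinispan.client.hotrod.cluster.siteB = hostB1:11222;hostB2:11222
The ConfigurationBuilder examples for multiple cluster connections and for a default server list with a failover cluster follow below.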
Multiple cluster connections ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addCluster("siteA") .addClusterNode("hostA1", 11222) .addClusterNode("hostA2", 11222) .addCluster("siteB") .addClusterNodes("hostB1:11222; hostB2:11222"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); Default server list with a failover cluster ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServers("hostA1:11222; hostA2:11222") .addCluster("siteB") .addClusterNodes("hostB1:11222; hostB2:11223"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); 3.2.2. Manually Switching Data Grid Clusters Manually switch Hot Rod Java client connections between Data Grid clusters. Procedure Call one of the following methods in the RemoteCacheManager class: switchToCluster(clusterName) switches to a specific cluster defined in the client configuration. switchToDefaultCluster() switches to the default cluster in the client configuration, which is defined as a list of Data Grid servers. Additional resources RemoteCacheManager 3.2.3. Configuring Connection Pools Hot Rod Java clients keep pools of persistent connections to Data Grid servers to reuse TCP connections instead of creating them on each request. Procedure Configure Hot Rod client connection pool settings as in the following examples: ConfigurationBuilder ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .connectionPool() .maxActive(10) .exhaustedAction(ExhaustedAction.valueOf("WAIT")) .maxWait(1) .minIdle(20) .minEvictableIdleTime(300000) .maxPendingRequests(20); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); hotrod-client.properties These configuration options give fine-grained control over the pool. We recommend tuning the pool according to the application needs. Otherwise, you might experience an increase in latency for new connections opening or a decrease in throughput with few connections available. A large pool could lead to a "herd effect" of new connections, given an abrupt increase in load. However, it should help in throughput with more connections to attend requests. A smaller pool avoids this effect and consumes fewer resources. However, it can only handle so many requests before starting to degrade. 3.3. Configuring Authentication Mechanisms for Hot Rod Clients Data Grid Server uses different mechanisms to authenticate Hot Rod client connections. Procedure Specify authentication mechanisms with the saslMechanism() method from the AuthenticationConfigurationBuilder class or with the infinispan.client.hotrod.sasl_mechanism property. Warning Authorization is an expensive operation performed on each new connection. We strongly recommend tuning the connection pool. 
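The end-to-end sketch below is not one of the official examples; the host, credentials, and cache name are placeholders. It combines the pool settings described above with SCRAM authentication and uses try-with-resources so the pooled connections are released when the manager closes.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.ExhaustedAction;
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
       .host("127.0.0.1")
       .port(11222)
       .security().authentication()
       .username("username")
       .password("changeme")
       .realm("default")
       .saslMechanism("SCRAM-SHA-512")
       // Keep the tuned pool settings from the section above.
       .connectionPool()
       .maxActive(10)
       .exhaustedAction(ExhaustedAction.WAIT);
// Closing the manager also closes its connection pool.
try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
    RemoteCache<String, String> cache = cacheManager.getCache("mycache");
    cache.put("key", "value");
}
The official examples for each SASL mechanism follow.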
SCRAM ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("SCRAM-SHA-512") .username("myuser") .password("qwer1234!"); DIGEST ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("DIGEST-MD5") .username("myuser") .password("qwer1234!"); PLAIN ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("PLAIN") .username("myuser") .password("qwer1234!"); OAUTHBEARER String token = "..."; // Obtain the token from your OAuth2 provider ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("OAUTHBEARER") .token(token); EXTERNAL ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host("127.0.0.1") .port(11222) .security() .ssl() // TrustStore stores trusted CA certificates for the server. .trustStoreFileName("/path/to/truststore") .trustStorePassword("truststorepassword".toCharArray()) .trustStoreType("PCKS12") // KeyStore stores valid client certificates. .keyStoreFileName("/path/to/keystore") .keyStorePassword("keystorepassword".toCharArray()) .keyStoreType("PCKS12") .authentication() .saslMechanism("EXTERNAL"); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache("secured"); GSSAPI LoginContext lc = new LoginContext("GssExample", new BasicCallbackHandler("krb_user", "krb_password".toCharArray())); lc.login(); Subject clientSubject = lc.getSubject(); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("GSSAPI") .clientSubject(clientSubject) .callbackHandler(new BasicCallbackHandler()); Basic Callback Handler The BasicCallbackHandler , as shown in the GSSAPI example, invokes the following callbacks: NameCallback and PasswordCallback construct the client subject. AuthorizeCallback is called during SASL authentication. OAUTHBEARER with Token Callback Handler Use a TokenCallbackHandler to refresh OAuth2 tokens before they expire, as in the following example: String token = "..."; // Obtain the token from your OAuth2 provider TokenCallbackHandler tokenHandler = new TokenCallbackHandler(token); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("OAUTHBEARER") .callbackHandler(tokenHandler); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache("secured"); // Refresh the token tokenHandler.setToken("newToken"); Custom CallbackHandler Hot Rod clients set up a default CallbackHandler to pass credentials to SASL mechanisms. 
In some cases you might need to provide a custom CallbackHandler , as in the following example: public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler(String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } } } ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security().authentication() .serverName("myhotrodserver") .saslMechanism("DIGEST-MD5") .callbackHandler(new MyCallbackHandler("myuser","default","qwer1234!".toCharArray())); Note A custom CallbackHandler needs to handle callbacks that are specific to the authentication mechanism that you use. However, it is beyond the scope of this document to provide examples for each possible callback type. 3.3.1. Creating GSSAPI Login Contexts To use the GSSAPI mechanism, you must create a LoginContext so your Hot Rod client can obtain a Ticket Granting Ticket (TGT). Procedure Define a login module in a login configuration file. gss.conf For the IBM JDK: gss-ibm.conf Set the following system properties: Note krb5.conf provides the location of your KDC. Use the kinit command to authenticate with Kerberos and verify krb5.conf . 3.3.2. SASL authentication mechanisms Data Grid Server supports the following SASL authentications mechanisms with Hot Rod and Memcached binary protocol endpoints: Authentication mechanism Description Security realm type Related details PLAIN Uses credentials in plain-text format. You should use PLAIN authentication with encrypted connections only. Property realms and LDAP realms Similar to the BASIC HTTP mechanism. DIGEST-* Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5 , DIGEST-SHA , DIGEST-SHA-256 , DIGEST-SHA-384 , and DIGEST-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. SCRAM-* Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA , SCRAM-SHA-256 , SCRAM-SHA-384 , and SCRAM-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. GSSAPI Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. GS2-KRB5 Uses Kerberos tickets and requires a Kerberos Domain Controller. 
You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. EXTERNAL Uses client certificates. Trust store realms Similar to the CLIENT_CERT HTTP mechanism. OAUTHBEARER Uses OAuth tokens and requires a token-realm configuration. Token realms Similar to the BEARER_TOKEN HTTP mechanism. 3.4. Configuring Hot Rod client encryption Data Grid Server can enforce SSL/TLS encryption and present Hot Rod clients with certificates to establish trust and negotiate secure connections. To verify certificates issued to Data Grid Server, Hot Rod clients require either the full certificate chain or a partial chain that starts with the Root CA. You provide server certificates to Hot Rod clients as trust stores. By default, Hot Rod clients will also perform hostname validation by matching the dNSName or iPAddress or both types contained by the server certificate's subjectAltName extension with the expected hostname. Tip Alternatively to providing trust stores you can use shared system certificates. Prerequisites Create a trust store that Hot Rod clients can use to verify Data Grid Server identities. If you configure Data Grid Server to validate or authenticate client certificates, create a keystore as appropriate. Procedure Add the trust store to the client configuration with the trustStoreFileName() and trustStorePassword() methods or corresponding properties. If you configure client certificate authentication, do the following: Add the keystore to the client configuration with the keyStoreFileName() and keyStorePassword() methods or corresponding properties. Configure clients to use the EXTERNAL authentication mechanism. ConfigurationBuilder ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host("127.0.0.1") .port(11222) .security() .ssl() // Server SNI hostname. .sniHostName("myservername") // Keystore that contains the public keys for Data Grid Server. // Clients use the trust store to verify Data Grid Server identities. .trustStoreFileName("/path/to/server/truststore") .trustStorePassword("truststorepassword".toCharArray()) .trustStoreType("PCKS12") // Keystore that contains client certificates. // Clients present these certificates to Data Grid Server. .keyStoreFileName("/path/to/client/keystore") .keyStorePassword("keystorepassword".toCharArray()) .keyStoreType("PCKS12") .authentication() // Clients must use the EXTERNAL mechanism for certificate authentication. .saslMechanism("EXTERNAL"); hotrod-client.properties steps Add a client trust store to the USDRHDG_HOME/server/conf directory and configure Data Grid Server to use it, if necessary. Additional resources Encrypting Data Grid Server Connections SslConfigurationBuilder Hot Rod client configuration properties Using Shared System Certificates (Red Hat Enterprise Linux 9 Securing Networks) 3.5. Enabling Hot Rod client statistics Hot Rod Java clients can provide statistics that include remote cache and near-cache hits and misses as well as connection pool usage. Procedure Open your Hot Rod Java client configuration for editing. Set true as the value for the statistics property or invoke the statistics().enable() methods. Export JMX MBeans for your Hot Rod client with the jmx and jmx_domain properties or invoke the jmxEnable() and jmxDomain() methods. Save and close your client configuration. 
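After you enable statistics and JMX with the configuration shown below, you can list the registered client MBeans with the standard JDK JMX API. This sketch is not from the guide and assumes the my.domain.org JMX domain used in the examples:
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
// The ObjectName constructor declares MalformedObjectNameException, so call this from code that handles it.
for (ObjectName name : mbeanServer.queryNames(new ObjectName("my.domain.org:*"), null)) {
    System.out.println(name);
}
The ConfigurationBuilder and hotrod-client.properties examples for enabling statistics and JMX follow.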
Hot Rod Java client statistics ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.statistics().enable() .jmxEnable() .jmxDomain("my.domain.org") .addServer() .host("127.0.0.1") .port(11222); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties infinispan.client.hotrod.statistics = true infinispan.client.hotrod.jmx = true infinispan.client.hotrod.jmx_domain = my.domain.org 3.6. Hot Rod client tracing propagation When you configure OpenTelemetry tracing on both the client VM and the Data Grid Server, the Hot Rod client enables automatic correlation of tracing spans between the client application and the Data Grid Server. Disabling tracing propagation from the client to the Data Grid Server Prerequisites Have OpenTelemetry tracing enabled on The Data Grid Server and the client side. Procedure Use the disableTracingPropagation() method to disable OpenTelemetry tracing propagation. import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .disableTracingPropagation(); The Hot Rod client stops sending tracing to the Data Grid Server. Additional resources Enabling Data Grid tracing 3.7. Near Caches Near caches are local to Hot Rod clients and store recently used data so that every read operation does not need to traverse the network, which significantly increases performance. Near caches: Are populated with read operations, calls to get() or getVersioned() methods. In the following example the put() call does not populate the near cache and only has the effect of invalidating the entry if it already exists: cache.put("k1", "v1"); cache.get("k1"); Register a client listener to invalidate entries when they are updated or removed in remote caches on Data Grid Server. If entries are requested after they are invalidated, clients must retrieve them from the remote caches again. Are cleared when clients fail over to different servers. Bounded near caches You should always use bounded near caches by specifying the maximum number of entries they can contain. When near caches reach the maximum number of entries, eviction automatically takes place to remove older entries. This means you do not need to manually keep the cache size within the boundaries of the client JVM. Important Do not use maximum idle expiration with near caches because near-cache reads do not propagate the last access time for entries. Bloom filters Bloom filters optimize performance for write operations by reducing the total number of invalidation messages. Bloom filters: Reside on Data Grid Server and keep track of the entries that the client has requested. Require a connection pool configuration that has a maximum of one active connection per server and uses the WAIT exhausted action. Cannot be used with unbounded near caches. 3.7.1. Configuring Near Caches Configure Hot Rod Java clients with near caches to store recently used data locally in the client JVM. Procedure Open your Hot Rod Java client configuration. Configure each cache to perform near caching with the nearCacheMode(NearCacheMode.INVALIDATED) method. Note Data Grid provides global near cache configuration properties. However, those properties are deprecated and you should not use them but configure near caching on a per-cache basis instead. 
Specify the maximum number of entries that the near cache can hold before eviction occurs with the nearCacheMaxEntries() method. Enable bloom filters for near caches with the nearCacheUseBloomFilter() method. import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.NearCacheMode; import org.infinispan.client.hotrod.configuration.ExhaustedAction; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username("username") .password("password") .realm("default") .saslMechanism("SCRAM-SHA-512") // Configure the connection pool for bloom filters. .connectionPool() .maxActive(1) .exhaustedAction(ExhaustedAction.WAIT); // Configure near caching for specific caches builder.remoteCache("books") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(100) .nearCacheUseBloomFilter(false); builder.remoteCache("authors") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(200) .nearCacheUseBloomFilter(true); Additional resources org.infinispan.client.hotrod.configuration.NearCacheConfiguration org.infinispan.client.hotrod.configuration.ExhaustedAction 3.8. Forcing Return Values To avoid sending data unnecessarily, write operations on remote caches return null instead of values. For example, the following method calls do not return values for keys: V remove(Object key); V put(K key, V value); You can, however, change the default behavior so your invocations return values for keys. Procedure Configure Hot Rod clients so method calls return values for keys in one of the following ways: FORCE_RETURN_VALUE flag cache.withFlags(Flag.FORCE_RETURN_VALUE).put("aKey", "newValue") Per-cache ConfigurationBuilder builder = new ConfigurationBuilder(); // Return values for keys for invocations for a specific cache. builder.remoteCache("mycache") .forceReturnValues(true); hotrod-client.properties Additional resources org.infinispan.client.hotrod.Flag 3.9. Creating remote caches from Hot Rod clients Use the Data Grid Hot Rod API to create remote caches on Data Grid Server from Java, C++, .NET/C#, JS clients and more. This procedure shows you how to use Hot Rod Java clients that create remote caches on first access. You can find code examples for other Hot Rod clients in the Data Grid Tutorials . Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Invoke the remoteCache() method as part of your the ConfigurationBuilder . Set the configuration or configuration_uri properties in the hotrod-client.properties file on your classpath. ConfigurationBuilder File file = new File("path/to/infinispan.xml") ConfigurationBuilder builder = new ConfigurationBuilder(); builder.remoteCache("another-cache") .configuration("<distributed-cache name=\"another-cache\"/>"); builder.remoteCache("my.other.cache") .configurationURI(file.toURI()); hotrod-client.properties Important If the name of your remote cache contains the . character, you must enclose it in square brackets when using hotrod-client.properties files. Additional resources Hot Rod Client Configuration org.infinispan.client.hotrod.configuration.RemoteCacheConfigurationBuilder | [
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-client-hotrod</artifactId> </dependency>",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .addServer() .host(\"192.0.2.0\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username(\"username\") .password(\"changeme\") .realm(\"default\") .saslMechanism(\"SCRAM-SHA-512\"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());",
"infinispan.client.hotrod.server_list = 127.0.0.1:11222,192.0.2.0:11222 infinispan.client.hotrod.auth_username = username infinispan.client.hotrod.auth_password = changeme infinispan.client.hotrod.auth_realm = default infinispan.client.hotrod.sasl_mechanism = SCRAM-SHA-512",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.uri(\"hotrod://username:[email protected]:11222,192.0.2.0:11222?auth_realm=default&sasl_mechanism=SCRAM-SHA-512\"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());",
"infinispan.client.hotrod.uri = hotrod://username:[email protected]:11222,192.0.2.0:11222?auth_realm=default&sasl_mechanism=SCRAM-SHA-512",
"ConfigurationBuilder builder = new ConfigurationBuilder(); Properties p = new Properties(); try(Reader r = new FileReader(\"/path/to/hotrod-client.properties\")) { p.load(r); builder.withProperties(p); } RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addCluster(\"siteA\") .addClusterNode(\"hostA1\", 11222) .addClusterNode(\"hostA2\", 11222) .addCluster(\"siteB\") .addClusterNodes(\"hostB1:11222; hostB2:11222\"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServers(\"hostA1:11222; hostA2:11222\") .addCluster(\"siteB\") .addClusterNodes(\"hostB1:11222; hostB2:11223\"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .connectionPool() .maxActive(10) .exhaustedAction(ExhaustedAction.valueOf(\"WAIT\")) .maxWait(1) .minIdle(20) .minEvictableIdleTime(300000) .maxPendingRequests(20); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());",
"infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.connection_pool.max_active = 10 infinispan.client.hotrod.connection_pool.exhausted_action = WAIT infinispan.client.hotrod.connection_pool.max_wait = 1 infinispan.client.hotrod.connection_pool.min_idle = 20 infinispan.client.hotrod.connection_pool.min_evictable_idle_time = 300000 infinispan.client.hotrod.connection_pool.max_pending_requests = 20",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"SCRAM-SHA-512\") .username(\"myuser\") .password(\"qwer1234!\");",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"DIGEST-MD5\") .username(\"myuser\") .password(\"qwer1234!\");",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"PLAIN\") .username(\"myuser\") .password(\"qwer1234!\");",
"String token = \"...\"; // Obtain the token from your OAuth2 provider ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"OAUTHBEARER\") .token(token);",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .security() .ssl() // TrustStore stores trusted CA certificates for the server. .trustStoreFileName(\"/path/to/truststore\") .trustStorePassword(\"truststorepassword\".toCharArray()) .trustStoreType(\"PCKS12\") // KeyStore stores valid client certificates. .keyStoreFileName(\"/path/to/keystore\") .keyStorePassword(\"keystorepassword\".toCharArray()) .keyStoreType(\"PCKS12\") .authentication() .saslMechanism(\"EXTERNAL\"); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\");",
"LoginContext lc = new LoginContext(\"GssExample\", new BasicCallbackHandler(\"krb_user\", \"krb_password\".toCharArray())); lc.login(); Subject clientSubject = lc.getSubject(); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"GSSAPI\") .clientSubject(clientSubject) .callbackHandler(new BasicCallbackHandler());",
"String token = \"...\"; // Obtain the token from your OAuth2 provider TokenCallbackHandler tokenHandler = new TokenCallbackHandler(token); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"OAUTHBEARER\") .callbackHandler(tokenHandler); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\"); // Refresh the token tokenHandler.setToken(\"newToken\");",
"public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler(String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } } } ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security().authentication() .serverName(\"myhotrodserver\") .saslMechanism(\"DIGEST-MD5\") .callbackHandler(new MyCallbackHandler(\"myuser\",\"default\",\"qwer1234!\".toCharArray()));",
"GssExample { com.sun.security.auth.module.Krb5LoginModule required client=TRUE; };",
"GssExample { com.ibm.security.auth.module.Krb5LoginModule required client=TRUE; };",
"java.security.auth.login.config=gss.conf java.security.krb5.conf=/etc/krb5.conf",
"ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .security() .ssl() // Server SNI hostname. .sniHostName(\"myservername\") // Keystore that contains the public keys for Data Grid Server. // Clients use the trust store to verify Data Grid Server identities. .trustStoreFileName(\"/path/to/server/truststore\") .trustStorePassword(\"truststorepassword\".toCharArray()) .trustStoreType(\"PCKS12\") // Keystore that contains client certificates. // Clients present these certificates to Data Grid Server. .keyStoreFileName(\"/path/to/client/keystore\") .keyStorePassword(\"keystorepassword\".toCharArray()) .keyStoreType(\"PCKS12\") .authentication() // Clients must use the EXTERNAL mechanism for certificate authentication. .saslMechanism(\"EXTERNAL\");",
"infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.use_ssl = true Use SNI for hostname validation infinispan.client.hotrod.sni_host_name = myservername Keystore that contains the public keys for Data Grid Server. Clients use the trust store to verify Data Grid Server identities. infinispan.client.hotrod.trust_store_file_name = server_truststore.pkcs12 infinispan.client.hotrod.trust_store_password = changeme infinispan.client.hotrod.trust_store_type = PCKS12 Keystore that contains client certificates. Clients present these certificates to Data Grid Server. infinispan.client.hotrod.key_store_file_name = client_keystore.pkcs12 infinispan.client.hotrod.key_store_password = changeme infinispan.client.hotrod.key_store_type = PCKS12 Clients must use the EXTERNAL mechanism for certificate authentication. infinispan.client.hotrod.sasl_mechanism = EXTERNAL",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.statistics().enable() .jmxEnable() .jmxDomain(\"my.domain.org\") .addServer() .host(\"127.0.0.1\") .port(11222); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());",
"infinispan.client.hotrod.statistics = true infinispan.client.hotrod.jmx = true infinispan.client.hotrod.jmx_domain = my.domain.org",
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .disableTracingPropagation();",
"cache.put(\"k1\", \"v1\"); cache.get(\"k1\");",
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.NearCacheMode; import org.infinispan.client.hotrod.configuration.ExhaustedAction; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username(\"username\") .password(\"password\") .realm(\"default\") .saslMechanism(\"SCRAM-SHA-512\") // Configure the connection pool for bloom filters. .connectionPool() .maxActive(1) .exhaustedAction(ExhaustedAction.WAIT); // Configure near caching for specific caches builder.remoteCache(\"books\") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(100) .nearCacheUseBloomFilter(false); builder.remoteCache(\"authors\") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(200) .nearCacheUseBloomFilter(true);",
"V remove(Object key); V put(K key, V value);",
"cache.withFlags(Flag.FORCE_RETURN_VALUE).put(\"aKey\", \"newValue\")",
"ConfigurationBuilder builder = new ConfigurationBuilder(); // Return previous values for keys for invocations for a specific cache. builder.remoteCache(\"mycache\") .forceReturnValues(true);",
"Use the \"*\" wildcard in the cache name to return previous values for all caches that start with the \"somecaches\" string. infinispan.client.hotrod.cache.somecaches*.force_return_values = true",
"File file = new File(\"path/to/infinispan.xml\") ConfigurationBuilder builder = new ConfigurationBuilder(); builder.remoteCache(\"another-cache\") .configuration(\"<distributed-cache name=\\\"another-cache\\\"/>\"); builder.remoteCache(\"my.other.cache\") .configurationURI(file.toURI());",
"infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name=\\\"another-cache\\\"/> infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/hot_rod_java_client_guide/hotrod-client-configuration_hot_rod |
Chapter 5. OpenShift Data Foundation deployed using local storage devices | Chapter 5. OpenShift Data Foundation deployed using local storage devices 5.1. Replacing operational or failed storage devices on clusters backed by local storage devices You can replace an object storage device (OSD) in OpenShift Data Foundation deployed using local storage devices on the following infrastructures: Bare metal VMware Note There might be a need to replace one or more underlying storage devices. Prerequisites Red Hat recommends that replacement devices are configured with similar infrastructure and resources to the device being replaced. Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Remove the underlying storage device from relevant worker node. Verify that relevant OSD Pod has moved to CrashLoopBackOff state. Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state for more than a few minutes, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Delete any old ocs-osd-removal jobs. Example output: Note The above command must reach Completed state before moving to the steps. This can take more than ten minutes. Navigate to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSDs that are removed from the respective OpenShift Data Foundation nodes. Get the Persistent Volume Claim (PVC) names of the replaced OSDs from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find the relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. 
<ocs-deviceset-name> Is the name of the relevant device based on the PVC names identified in the step. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Find the persistent volume (PV) that need to be deleted. Example output: Delete the PV. Physically add a new device to the node. Track the provisioning of PVs for the devices that match the deviceInclusionSpec . It can take a few minutes to provision the PVs. Example output: Once the PV is provisioned, a new OSD pod is automatically created for the PV. Delete the ocs-osd-removal job(s). Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Important If the new OSD does not show as Running after a few minutes, restart the rook-ceph-operator pod to force a reconciliation. Example output: Verify that a new PVC is created. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSDs are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and check the OSD status on the storage dashboard. Note A full data recovery may take longer depending on the volume of data being recovered. 5.2. Replacing operational or failed storage devices on IBM Power You can replace an object storage device (OSD) in OpenShift Data Foundation deployed using local storage devices on IBM Power. Note There might be a need to replace one or more underlying storage devices. Prerequisites Red Hat recommends that replacement devices are configured with similar infrastructure and resources to the device being replaced. Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-86bf8cdc8-4nb5t needs to be replaced and worker-0 is the RHOCP node on which the OSD is scheduled. Note The status of the pod is Running if the OSD you want to replace is healthy. Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state for more than a few minutes, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Identify the DeviceSet associated with the OSD to be replaced. 
Example output: In this example, the Persistent Volume Claim (PVC) name is ocs-deviceset-localblock-0-data-0-64xjl . Identify the Persistent Volume (PV) associated with the PVC. where, x , y , and pvc-suffix are the values in the DeviceSet identified in an earlier step. Example output: In this example, the associated PV is local-pv-8137c873 . Identify the name of the device to be replaced. where, pv-suffix is the value in the PV name identified in an earlier step. Example output: In this example, the device name is vdc . Identify the prepare-pod associated with the OSD to be replaced. where, x , y , and pvc-suffix are the values in the DeviceSet identified in an earlier step. Example output: In this example, the prepare-pod name is rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc . Delete any old ocs-osd-removal jobs. Example output: Note The above command must reach Completed state before moving to the steps. This can take more than ten minutes. Change to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSDs that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find the relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Find the PV that need to be deleted. Example output: Delete the PV. <pv-name> Is the name of the PV. Replace the old device and use the new device to create a new OpenShift Container Platform PV. Log in to the OpenShift Container Platform node with the device to be replaced. In this example, the OpenShift Container Platform node is worker-0 . Example output: Record the /dev/disk that is to be replaced using the device name, vdc , identified earlier. Example output: Find the name of the LocalVolume CR, and remove or comment out the device /dev/disk that is to be replaced. Example output: Example output: Make sure to save the changes after editing the CR. Log in to the OpenShift Container Platform node with the device to be replaced and remove the old symlink . Example output: Identify the old symlink for the device name to be replaced. In this example, the device name is vdc . 
Example output: Remove the symlink . Verify that the symlink is removed. Example output: Replace the old device with the new device. Log back into the correct OpenShift Cotainer Platform node and identify the device name for the new drive. The device name must change unless you are resetting the same device. Example output: In this example, the new device name is vdd . After the new /dev/disk is available, you can add a new disk entry to the LocalVolume CR. Edit the LocalVolume CR and add the new /dev/disk . In this example, the new device is /dev/vdd . Example output: Make sure to save the changes after editing the CR. Verify that there is a new PV in Available state and of the correct size. Example output: Create a new OSD for the new device. Deploy the new OSD. You need to restart the rook-ceph-operator to force operator reconciliation. Identify the name of the rook-ceph-operator . Example output: Delete the rook-ceph-operator . Example output: In this example, the rook-ceph-operator pod name is rook-ceph-operator-85f6494db4-sg62v . Verify that the rook-ceph-operator pod is restarted. Example output: Creation of the new OSD may take several minutes after the operator restarts. Delete the ocs-osd-removal job(s). Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verfication steps Verify that there is a new OSD running. Example output: Verify that a new PVC is created. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the previously identified nodes, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and check the status card in the OpenShift Data Foundation dashboard under Storage section. Note A full data recovery may take longer depending on the volume of data being recovered. 5.3. Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure You can replace operational or failed storage devices on IBM Z or IBM(R) LinuxONE infrastructure with new Small Computer System Interface (SCSI) disks. IBM Z or IBM(R) LinuxONE supports SCSI FCP disk logical units (SCSI disks) as persistent storage devices from external disk storage. You can identify a SCSI disk using its FCP Device number, two target worldwide port names (WWPN1 and WWPN2), and the logical unit number (LUN). For more information, see https://www.ibm.com/support/knowledgecenter/SSB27U_6.4.0/com.ibm.zvm.v640.hcpa5/scsiover.html Prerequisites Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. If one storage device fails, you can replace it with a new disk. Remove the disk. 
Run the following command on the disk, replacing scsi-id with the SCSI disk identifier of the disk to be replaced: For example, the following command removes one disk with the device ID 0.0.8204 , the WWPN 0x500507630a0b50a4 , and the LUN 0x4002403000000000 : Append a new SCSI disk. Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Example output: | [
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0",
"oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found in openshift-storage namespace.",
"oc delete -n openshift-storage pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --grace-period=0 --force",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/ <node name>",
"chroot /host",
"dmsetup ls| grep <pvc name>",
"ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose <ocs-deviceset-name>",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc get pv -L kubernetes.io/hostname | grep <storageclass-name> | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <pv_name>",
"oc -n openshift-local-storage describe localvolumeset <lvs-name>",
"[...] Status: Conditions: Last Transition Time: 2020-11-17T05:03:32Z Message: DiskMaker: Available, LocalProvisioner: Available Status: True Type: DaemonSetsAvailable Last Transition Time: 2020-11-17T05:03:34Z Message: Operator reconciled successfully. Status: True Type: Available Observed Generation: 1 Total Provisioned Device Count: 4 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Discovered 2m30s (x4 localvolumeset- node.example.com - NewDevice over 2m30s) symlink-controller found possible matching disk, waiting 1m to claim Normal FoundMatch 89s (x4 localvolumeset- node.example.com - ingDisk over 89s) symlink-controller symlinking matching disk",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc delete pod -n openshift-storage -l app=rook-ceph-operator",
"pod \"rook-ceph-operator-6f74fb5bff-2d982\" deleted",
"oc get -n openshift-storage pvc | grep <lvs-name>",
"ocs-deviceset-0-0-c2mqb Bound local-pv-b481410 1490Gi RWO localblock 5m ocs-deviceset-1-0-959rp Bound local-pv-414755e0 1490Gi RWO localblock 1d20h ocs-deviceset-2-0-79j94 Bound local-pv-3e8964d3 1490Gi RWO localblock 1d20h",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node name>",
"chroot /host",
"lsblk",
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-86bf8cdc8-4nb5t 0/1 crashLoopBackOff 0 24h 10.129.2.26 worker-0 <none> <none> rook-ceph-osd-1-7c99657cfb-jdzvz 1/1 Running 0 24h 10.128.2.46 worker-1 <none> <none> rook-ceph-osd-2-5f9f6dfb5b-2mnw9 1/1 Running 0 24h 10.131.0.33 worker-2 <none> <none>",
"osd_id_to_remove=0",
"oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found in openshift-storage namespace.",
"oc delete -n openshift-storage pod rook-ceph-osd-0-86bf8cdc8-4nb5t --grace-period=0 --force",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-86bf8cdc8-4nb5t\" force deleted",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{osd_id_to_remove} | grep ceph.rook.io/pvc",
"ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl",
"oc get -n openshift-storage pvc ocs-deviceset- <x> - <y> - <pvc-suffix>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-localblock-0-data-0-64xjl Bound local-pv-8137c873 256Gi RWO localblock 24h",
"oc get pv local-pv- <pv-suffix> -o yaml | grep path",
"path: /mnt/local-storage/localblock/vdc",
"oc describe -n openshift-storage pvc ocs-deviceset- <x> - <y> - <pvc-suffix> | grep Used",
"Used By: rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/ <node name>",
"chroot /host",
"dmsetup ls| grep <pvc name>",
"ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <pv-name>",
"oc debug node/worker-0",
"Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.88.21 If you don't see a command prompt, try pressing enter. chroot /host",
"ls -alh /mnt/local-storage/localblock",
"total 0 drwxr-xr-x. 2 root root 17 Nov 18 15:23 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 .. lrwxrwxrwx. 1 root root 8 Nov 18 15:23 vdc -> /dev/vdc",
"oc get -n openshift-local-storage localvolume",
"NAME AGE localblock 25h",
"oc edit -n openshift-local-storage localvolume localblock",
"[...] storageClassDevices: - devicePaths: # - /dev/vdc storageClassName: localblock volumeMode: Block [...]",
"oc debug node/worker-0",
"Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.88.21 If you don't see a command prompt, try pressing enter. chroot /host",
"ls -alh /mnt/local-storage/localblock",
"total 0 drwxr-xr-x. 2 root root 17 Nov 18 15:23 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 .. lrwxrwxrwx. 1 root root 8 Nov 18 15:23 vdc -> /dev/vdc",
"rm /mnt/local-storage/localblock/vdc",
"ls -alh /mnt/local-storage/localblock",
"total 0 drwxr-xr-x. 2 root root 6 Nov 18 17:11 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 ..",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 252:0 0 40G 0 disk |-vda1 252:1 0 4M 0 part |-vda2 252:2 0 384M 0 part /boot `-vda4 252:4 0 39.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 39.6G 0 dm /sysroot vdb 252:16 0 512B 1 disk vdd 252:32 0 256G 0 disk",
"oc edit -n openshift-local-storage localvolume localblock",
"[...] storageClassDevices: - devicePaths: # - /dev/vdc - /dev/vdd storageClassName: localblock volumeMode: Block [...]",
"oc get pv | grep 256Gi",
"local-pv-1e31f771 256Gi RWO Delete Bound openshift-storage/ocs-deviceset-localblock-2-data-0-6xhkf localblock 24h local-pv-ec7f2b80 256Gi RWO Delete Bound openshift-storage/ocs-deviceset-localblock-1-data-0-hr2fx localblock 24h local-pv-8137c873 256Gi RWO Delete Available localblock 32m",
"oc get -n openshift-storage pod -l app=rook-ceph-operator",
"NAME READY STATUS RESTARTS AGE rook-ceph-operator-85f6494db4-sg62v 1/1 Running 0 1d20h",
"oc delete -n openshift-storage pod rook-ceph-operator-85f6494db4-sg62v",
"pod \"rook-ceph-operator-85f6494db4-sg62v\" deleted",
"oc get -n openshift-storage pod -l app=rook-ceph-operator",
"NAME READY STATUS RESTARTS AGE rook-ceph-operator-85f6494db4-wx9xx 1/1 Running 0 50s",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-76d8fb97f9-mn8qz 1/1 Running 0 23m rook-ceph-osd-1-7c99657cfb-jdzvz 1/1 Running 1 25h rook-ceph-osd-2-5f9f6dfb5b-2mnw9 1/1 Running 0 25h",
"oc get -n openshift-storage pvc | grep localblock",
"ocs-deviceset-localblock-0-data-0-q4q6b Bound local-pv-8137c873 256Gi RWO localblock 10m ocs-deviceset-localblock-1-data-0-hr2fx Bound local-pv-ec7f2b80 256Gi RWO localblock 1d20h ocs-deviceset-localblock-2-data-0-6xhkf Bound local-pv-1e31f771 256Gi RWO localblock 1d20h",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node name>",
"chroot /host",
"lsblk",
"lszdev",
"TYPE ID zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no",
"chzdev -d scsi-id",
"chzdev -d 0.0.8204:0x500407630c0b50a4:0x3002b03000000000",
"chzdev -e 0.0.8204:0x500507630b1b50a4:0x4001302a00000000",
"lszdev zfcp-lun",
"TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/replacing_devices/openshift_data_foundation_deployed_using_local_storage_devices |
Chapter 3. 11 December 2024 | Chapter 3. 11 December 2024 This release of Red Hat Ansible Lightspeed features the following enhancement: Playbook generation and viewing playbook explanations are now supported on Red Hat Ansible Lightspeed on-premise deployments. Using the Ansible VS Code extension, you can create Ansible playbooks using a natural language interface in English. You can also view the explanations for new or existing playbooks. For more information, see Creating playbooks and viewing playbook explanations . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_release_notes/lightspeed-key-features-11dec2024_lightspeed-release-notes
4.3.2. Converting a virtual machine | 4.3.2. Converting a virtual machine Once you have prepared to convert the virtual machines, use virt-v2v to perform the actual conversions. This section provides the steps to convert the virtual machines, and the command syntax for virt-v2v . Note that conversions are resource-intensive processes that require copying the whole disk image for a virtual machine. In typical environments, converting a single virtual machine takes approximately 5-10 minutes. In Example 4.2, "Typical virt-v2v conversion time", a virtual machine with a single 8GB disk is copied over SSH on a 1GigE network on three-year-old consumer hardware: Example 4.2. Typical virt-v2v conversion time The size of the disk to be copied is the major factor in determining conversion time. For a virtual machine on average hardware with a single disk of 20GB or less, a conversion usually takes less than 10 minutes. 4.3.2.1. virt-v2v virt-v2v converts virtual machines from a foreign hypervisor to run on Red Hat Enterprise Virtualization. The general command syntax for converting machines to run on Red Hat Enterprise Virtualization is: A full specification of the parameters which can be used with virt-v2v is available in Section 7.1, "virt-v2v Parameters" . Important Presently, virt-v2v does not support converting a virtual machine with more than four IDE disks to Red Hat Enterprise Virtualization. Attempting to convert a virtual machine with five or more disks will fail. | [
"win2k3r2-pv-32.img: 100% [===========================================]D 0h02m57s virt-v2v: win2k3r2-pv-32 configured with virtio drivers.",
"virt-v2v -i libvirtxml -o rhev -os storage.example.com:/exportdomain --network rhevm guest_name.xml virt-v2v -o rhev -os storage.example.com:/exportdomain --network rhevm guest_name virt-v2v -ic esx://esx.example.com/?no_verify=1 -o rhev -os storage.example.com:/exportdomain --network rhevm guest_name"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-Converting_a_Virtual_Machine |
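Because the example invocations above are flattened onto a single line, a commented breakdown of one of them may be easier to read. The per-option notes are descriptive annotations inferred from the surrounding text, not additional options from the guide, and the host names and guest name are the same placeholders used in the original examples.

# Convert a guest directly from an ESX server and write it to a Red Hat
# Enterprise Virtualization export storage domain.
#   -ic        connection URI of the source hypervisor (no_verify=1 skips certificate checking)
#   -o rhev    output method: a Red Hat Enterprise Virtualization export domain
#   -os        the export storage domain that receives the converted guest
#   --network  logical network to which the guest's interfaces are mapped
virt-v2v -ic 'esx://esx.example.com/?no_verify=1' -o rhev \
    -os storage.example.com:/exportdomain --network rhevm guest_name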
Chapter 39. Linux Containers | Chapter 39. Linux Containers docker component, BZ# 1193609 If docker is setting up loop devices for docker thin pool setup, docker operations like docker deletion and container I/O operations can be slow. The strongly recommended alternative configuration is to set up an LVM thin pool and use it as storage back-end for docker. Instructions on setting up an LVM thin pool can be found in the lvmthin(7) manual page. Then modify the /etc/sysconfig/docker-storage file to include the following line to make use of the LVM thin pool for container storage. docker component, BZ#1190492 A Super-Privileged Container (SPC) that is launched while some application containers are already active has access to the file system trees of these application containers. The file system trees reside in device mapper "thin target" devices. Since the SPC holds references on these file system trees, the docker daemon fails to clean up the "thin target" (the device is still "busy") at the time when an application container is terminated. As a consequence, the following error message is logged in the journal of systemd: where {Id} is a placeholder for the container runtime ID, and a stale device mapper "thin target" is left behind after an application container is terminated. docker component, BZ#1188252 The docker daemon can occasionally terminate unexpectedly while a Super-Privileged Container (SPC) is running. Consequently, a stale entry related to the Super-Privileged Container is left behind in /var/lib/docker/linkgraph.db , and the container cannot be restarted correctly afterwards. gdb component, BZ# 1186918 If the GNU debugger (GDB) is executing inside a Super-Privileged Container (SPC) and attaches to a process that is running in another container on Red Hat Enterprise Linux Atomic Host, GDB does not locate the binary images of the main executable or any shared libraries loaded by the process to be debugged. As a consequence, GDB may display error messages relating to files not being present, or being present but mismatched, or GDB may seem to attach correctly but then subsequent commands may fail or display corrupted information. A workaround is to specify the sysroot and file prior to issuing the command, as follows: | [
"DOCKER_STORAGE_OPTIONS= --storage-opt dm.thinpooldev=<pool-device>",
"Cannot destroy container {Id}: Driver devicemapper failed to remove root filesystem {Id}: Device is Busy",
"Cannot destroy container {Id}: Driver devicemapper failed to remove root filesystem {Id}: Device is Busy",
"set sysroot /proc/PID/root file /proc/PID/exe attach PID"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/known-issues-linux_containers |
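To make the GDB workaround easier to follow, the three commands can also be passed non-interactively with gdb's -ex option. This is only a sketch: the target process ID of 1234 is illustrative, and the use of -ex is an editorial convenience rather than part of the original erratum text.

PID=1234    # replace with the PID of the process inside the application container
gdb -ex "set sysroot /proc/${PID}/root" \
    -ex "file /proc/${PID}/exe" \
    -ex "attach ${PID}"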
Appendix B. S3 common request headers | Appendix B. S3 common request headers The following table lists the valid common request headers and their descriptions. Table B.1. Request Headers (columns: Request Header, Description) CONTENT_LENGTH: Length of the request body. DATE: Request time and date (in UTC). HOST: The name of the host server. AUTHORIZATION: Authorization token. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/developer_guide/s3-common-request-headers_dev
Chapter 25. Storage | Chapter 25. Storage Multi-queue I/O scheduling for SCSI Red Hat Enterprise Linux 7.2 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq. The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line. Improved LVM locking infrastructure lvmlockd is a next-generation locking infrastructure for LVM. It allows LVM to safely manage shared storage from multiple hosts, using either the dlm or sanlock lock managers. sanlock allows lvmlockd to coordinate hosts through storage-based locking, without the need for an entire cluster infrastructure. For more information, see the lvmlockd (8) man page. Targetd plug-in from the libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. The Targetd plug-in is not fully supported and remains a Technology Preview. DIF/DIX DIF/DIX is a new addition to the SCSI Standard. It is fully supported in Red Hat Enterprise Linux 7.2 for the HBAs and storage arrays specified in the Features chapter, but it remains in Technology Preview for all other HBAs and storage arrays. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receipt, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/technology-preview-storage
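One possible way to add the scsi_mod.use_blk_mq=Y parameter mentioned above on a RHEL 7 host is sketched below. The use of grubby is an assumption about the boot-loader workflow, not a step taken from the release note, so adapt it to your own kernel-argument management practice.

# Add the parameter to every installed kernel's boot entry, then reboot.
grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=Y"
# After rebooting, confirm that the parameter is active.
grep scsi_mod.use_blk_mq /proc/cmdline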
Chapter 5. Determining whether a printer supports driverless printing | Chapter 5. Determining whether a printer supports driverless printing CUPS supports driverless printing, which means that you can print without providing any hardware-specific software for the printer model. For this, the printer must inform the client about its capabilities and use one of the following standards: AirPrint™ IPP Everywhere™ Mopria® Wi-Fi Direct Print Services You can use the ipptool utility to find out whether a printer supports driverless printing. Prerequisites The printer or remote print server supports the Internet Printing Protocol (IPP). The host can connect to the IPP port of the printer or remote print server. The default IPP port is 631. Procedure Query the ipp-versions-supported and document-format-supported attributes, and ensure that the get-printer-attributes test passes: For a remote printer, enter: For a queue on a remote print server, enter: To ensure that driverless printing works, verify in the output: The get-printer-attributes test returns PASS . The IPP version that the printer supports is 2.0 or higher. The list of formats contains one of the following: application/pdf image/urf image/pwg-raster For color printers, the output contains one of the mentioned formats and, additionally, image/jpeg . Next steps: Add a printer to CUPS by using the web interface Add a printer to CUPS by using the lpadmin utility | [
"ipptool -tv ipp:// <ip_address_or_hostname> :631/ipp/print get-printer-attributes.test | grep -E \"ipp-versions-supported|document-format-supported|get-printer-attributes\" Get printer attributes using get-printer-attributes [PASS] ipp-versions-supported (1setOf keyword) = document-format-supported (1setOf mimeMediaType) =",
"ipptool -tv ipp:// <ip_address_or_hostname> :631/printers/ <queue_name> get-printer-attributes.test | grep -E \"ipp-versions-supported|document-format-supported|get-printer-attributes\" Get printer attributes using get-printer-attributes [PASS] ipp-versions-supported (1setOf keyword) = document-format-supported (1setOf mimeMediaType) ="
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/determining-whether-a-printer-supports-driverless-printing_configuring-printing |
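For checking several printers, the ipptool query above can be wrapped in a small script. This is only a sketch: the script name, variable names, and messages are illustrative, while the ipptool invocation and the MIME types it looks for are the ones given in the procedure.

#!/bin/bash
# Usage: ./check-driverless.sh <ip_address_or_hostname>
printer="$1"
out=$(ipptool -tv "ipp://${printer}:631/ipp/print" get-printer-attributes.test)
if echo "${out}" | grep -q '\[PASS\]' && \
   echo "${out}" | grep -Eq 'application/pdf|image/urf|image/pwg-raster'; then
    echo "${printer}: driverless printing looks supported"
else
    echo "${printer}: no evidence of driverless support - check the IPP version and formats"
fi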
Chapter 5. Planning an SR-IOV deployment | Chapter 5. Planning an SR-IOV deployment Optimize single root I/O virtualization (SR-IOV) deployments for NFV by setting individual parameters based on your Compute node hardware. See Discovering your NUMA node topology to evaluate your hardware impact on the SR-IOV parameters. 5.1. Hardware partitioning for an SR-IOV deployment To achieve high performance with SR-IOV, partition the resources between the host and the guest. A typical topology includes 14 cores per NUMA node on dual socket Compute nodes. Both hyper-threading (HT) and non-HT cores are supported. Each core has two sibling threads. One core is dedicated to the host on each NUMA node. The virtual network function (VNF) handles the SR-IOV interface bonding. All the interrupt requests (IRQs) are routed on the host cores. The VNF cores are dedicated to the VNFs. They provide isolation from other VNFs and isolation from the host. Each VNF must use resources on a single NUMA node. The SR-IOV NICs used by the VNF must also be associated with that same NUMA node. This topology does not have a virtualization overhead. The host, OpenStack Networking (neutron), and Compute (nova) configuration parameters are exposed in a single file for ease, consistency, and to avoid incoherence that is fatal to proper isolation, causing preemption, and packet loss. The host and virtual machine isolation depend on a tuned profile, which defines the boot parameters and any Red Hat OpenStack Platform modifications based on the list of isolated CPUs. 5.2. Topology of an NFV SR-IOV deployment The following image has two VNFs each with the management interface represented by mgt and the data plane interfaces. The management interface manages the ssh access, and so on. The data plane interfaces bond the VNFs to DPDK to ensure high availability, as VNFs bond the data plane interfaces using the DPDK library. The image also has two provider networks for redundancy. The Compute node has two regular NICs bonded together and shared between the VNF management and the Red Hat OpenStack Platform API management. The image shows a VNF that uses DPDK at an application level, and has access to SR-IOV virtual functions (VFs) and physical functions (PFs), for better availability or performance, depending on the fabric configuration. DPDK improves performance, while the VF/PF DPDK bonds provide support for failover, and high availability. The VNF vendor must ensure that the DPDK poll mode driver (PMD) supports the SR-IOV card that is being exposed as a VF/PF. The management network uses OVS, therefore the VNF sees a mgmt network device using the standard virtIO drivers. You can use that device to initially connect to the VNF, and ensure that the DPDK application bonds the two VF/PFs. 5.2.1. Topology for NFV SR-IOV without HCI Observe the topology for SR-IOV without hyper-converged infrastructure (HCI) for NFV in the image below. It consists of compute and controller nodes with 1 Gbps NICs, and the director node. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/network_functions_virtualization_planning_and_configuration_guide/assembly_sriov_parameters |
7.81. ipa | 7.81. ipa 7.81.1. RHSA-2015:1462 - Moderate: ipa security and bug fix update Updated ipa packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. Red Hat Identity Management (IdM) is a centralized authentication, identity management, and authorization solution for both traditional and cloud-based enterprise environments. Two cross-site scripting (XSS) flaws were found in jQuery, which impacted the Identity Management web administrative interface, and could allow an authenticated user to inject arbitrary HTML or web script into the interface. (CVE-2010-5312, CVE-2012-6662) Bug Fixes BZ# 1131571 The ipa-server-install, ipa-replica-install, and ipa-client-install utilities are not supported on machines running in FIPS-140 mode. Previously, IdM did not warn users about this. Now, IdM does not allow running the utilities in FIPS-140 mode, and displays an explanatory message. BZ# 1132261 If an Active Directory (AD) server was specified or discovered automatically when running the ipa-client-install utility, the utility produced a traceback instead of informing the user that an IdM server is expected in this situation. Now, ipa-client-install detects the AD server and fails with an explanatory message. BZ# 1154687 When IdM servers were configured to require the TLS protocol version 1.1 (TLSv1.1) or later in the httpd server, the ipa utility failed. With this update, running ipa works as expected with TLSv1.1 or later. BZ# 1161722 In certain high-load environments, the Kerberos authentication step of the IdM client installer can fail. Previously, the entire client installation failed in this situation. This update modifies ipa-client-install to prefer the TCP protocol over the UDP protocol and to retry the authentication attempt in case of failure. BZ# 1185207 If ipa-client-install updated or created the /etc/nsswitch.conf file, the sudo utility could terminate unexpectedly with a segmentation fault. Now, ipa-client-install puts a new line character at the end of nsswitch.conf if it modifies the last line of the file, fixing this bug. BZ# 1191040 The ipa-client-automount utility failed with the "UNWILLING_TO_PERFORM" LDAP error when the nsslapd-minssf Red Hat Directory Server configuration parameter was set to "1". This update modifies ipa-client-automount to use encrypted connection for LDAP searches by default, and the utility now finishes successfully even with nsslapd-minssf specified. BZ# 1198160 If installing an IdM server failed after the Certificate Authority (CA) installation, the "ipa-server-install --uninstall" command did not perform a proper cleanup. After the user issued "ipa-server-install --uninstall" and then attempted to install the server again, the installation failed. Now, "ipa-server-install --uninstall" removes the CA-related files in the described situation, and ipa-server-install no longer fails with the mentioned error message. BZ# 1198339 Running ipa-client-install added the "sss" entry to the sudoers line in nsswitch.conf even if "sss" was already configured and the entry was present in the file. Duplicate "sss" then caused sudo to become unresponsive. Now, ipa-client-install no longer adds "sss" if it is already present in nsswitch.conf. 
BZ# 1201454 After running ipa-client-install, it was not possible to log in using SSH under certain circumstances. Now, ipa-client-install no longer corrupts the sshd_config file, and the sshd service can start as expected, and logging in using SSH works in the described situation. BZ# 1220788 An incorrect definition of the dc attribute in the /usr/share/ipa/05rfc2247.ldif file caused bogus error messages to be returned during migration. The attribute has been fixed, but the bug persists if the copy-schema-to-ca.py script was run on Red Hat Enterprise Linux 6.6 prior to running it on Red Hat Enterprise Linux 6.7. To work around this problem, manually copy /usr/share/ipa/schema/05rfc2247.ldif to /etc/dirsrv/slapd-PKI-IPA/schema/ and restart IdM. Note The IdM version provided by this update no longer uses jQuery. All ipa users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ipa |
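The BZ# 1220788 workaround is given only in prose above. A rough sketch of it follows; the ipactl restart step is an interpretation of "restart IdM" rather than a command taken from the advisory, so verify it against your own restart procedure.

# Copy the corrected schema file into the Directory Server instance, then restart IdM.
cp /usr/share/ipa/schema/05rfc2247.ldif /etc/dirsrv/slapd-PKI-IPA/schema/
ipactl restart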
Chapter 3. Device Drivers | Chapter 3. Device Drivers This chapter provides a comprehensive listing of all device drivers which were updated in Red Hat Enterprise Linux 6.6. Storage Drivers The bnx2i driver has been upgraded to version 2.7.10.1. The hpsa driver has been upgraded to version 3.4.4-1-RH1. The bfa driver has been upgraded to version 3.2.23.0. The mvsas driver has been upgraded to the latest upstream version. The qla4xxx driver has been upgraded to version 5.03.00.00.06.06-k0. The mpt2sas driver has been upgraded to version 16.100.00.00. The qla2xxx driver has been upgraded to version 8.07.00.08.06.6-k. The bnx2fc driver has been upgraded to version 2.4.2. The lpfc driver has been upgraded to version 10.2.8020.1. Device driver ID changes have been implemented for the pm80xx driver to support series 8 controllers. Configuration parameters have been updated for the be2iscsi driver to support Dual Chute mode. The version string has been changed for the hpsa driver. The megaraid_sas driver has been upgraded to the latest upstream version. In addition, its changelog has been updated. Network Drivers The Brocade BNA driver has been updated to version 3.2.23.0. The qlcnic driver has been updated to version 5.3.59. The Emulex be2net driver has been updated to version 10.2. The bnx2x driver has been updated to utilize the version 7.8.19 firmware. The qlge driver has been updated to version 1.00.00.34. A fix has been implemented for the igbvf driver to properly handle 32-bit DMA masks. A fix has been implemented for the igb driver to properly handle 32-bit DMA masks. The bnx2 driver has been updated to version 2.2.4. All Mellanox mlx drivers have been updated to their latest upstream versions. The i40evf driver has been updated to its latest upstream version. The i40e driver has been updated to its latest upstream version. The netxen driver has been updated to version 4.0.82. The enic driver has been updated to support the Cisco low latency network interface controller. The ixgbevf driver has been updated to the latest upstream version. The ixgbe driver has been updated to the latest upstream version. The tg3 driver has been updated to version 3.137. Product naming has been updated for the sfc driver. The General Public License header and Copyright information have been updated for the e1000e driver. Miscellaneous Drivers A cache device mapper target has been added to the dm driver. The cnic driver has been updated to version 2.5.20. In addition, its copyright year has been updated. The sb_edac driver has been updated to support Haswell microarchitecture-based systems. The InfiniBand iser driver has been updated to version 1.3. The InfiniBand srp driver has been updated to the latest upstream version. In addition, its release date information has been updated. The InfiniBand qib driver has been updated to support Direct Connect Architecture. The intel_pstate driver has been updated to support Haswell CPU models. The rtsx driver has been updated to support the Realtek RTL8411B Ethernet controller. The openvswitch driver has been updated to support the Stream Control Transmission Protocol. The Direct Rendering Manager (DRM) module has been updated to version 3.14.2. The hid-multitouch module has been updated to the latest upstream version, adding support for Windows 8-certified touchscreens.
An update notifier for changes of Open Firmware device tree properties has been implemented into the pseries driver. The DRBG module has been implemented, introducing an SP800-90A Deterministic Random Bit Generator. Accelerated computation for the PCLMULQDQ instruction has been implemented into the crct10dif module. The cciss driver has been updated to the latest upstream version. The NVMe driver has been updated to include device and queue numbers in interrupt names. MCE decoding support has been expanded for the mce_amd driver. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ch-device_drivers
A.9. Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in BIOS | A.9. Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in BIOS Note To expand your expertise, you might also be interested in the Red Hat Virtualization (RH318) training course. This section describes how to identify hardware virtualization extensions and enable them in your BIOS if they are disabled. The Intel VT-x extensions can be disabled in the BIOS. Certain laptop vendors have disabled the Intel VT-x extensions by default in their CPUs. The virtualization extensions cannot be disabled in the BIOS for AMD-V. See the following section for instructions on enabling disabled virtualization extensions. Verify the virtualization extensions are enabled in BIOS. The BIOS settings for Intel VT or AMD-V are usually in the Chipset or Processor menus. The menu names may vary from this guide; the virtualization extension settings may be found in Security Settings or other non-standard menu names. Procedure A.3. Enabling virtualization extensions in BIOS Reboot the computer and open the system's BIOS menu. This can usually be done by pressing the delete key, the F1 key, or the Alt and F4 keys, depending on the system. Enabling the virtualization extensions in BIOS Note Many of the steps below may vary depending on your motherboard, processor type, chipset and OEM. See your system's accompanying documentation for the correct information on configuring your system. Open the Processor submenu The processor settings menu may be hidden in the Chipset , Advanced CPU Configuration or Northbridge . Enable Intel Virtualization Technology (also known as Intel VT-x). AMD-V extensions cannot be disabled in the BIOS and should already be enabled. The virtualization extensions may be labeled Virtualization Extensions , Vanderpool or various other names depending on the OEM and system BIOS. Enable Intel VT-d or AMD IOMMU, if the options are available. Intel VT-d and AMD IOMMU are used for PCI device assignment. Select Save & Exit . Reboot the machine. When the machine has booted, run grep -E "vmx|svm" /proc/cpuinfo . Specifying --color is optional, but useful if you want the search term highlighted. If the command produces output, the virtualization extensions are now enabled. If there is no output, your system may not have the virtualization extensions or the correct BIOS setting enabled. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-enabling_intel_vt_x_and_amd_v_virtualization_hardware_extensions_in_bios
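A short post-reboot check built around the grep command quoted above can make the final verification step explicit; the messages printed below are illustrative.

if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "Virtualization extensions are enabled (vmx/svm flags present)."
else
    echo "No vmx/svm flags found: re-check the BIOS setting or confirm that the CPU supports the extensions."
fi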
Appendix D. Internationalization | Appendix D. Internationalization Red Hat Directory Server allows users to store, manage, and search for entries and their associated attributes in a number of different languages. An internationalized directory can be an invaluable corporate resource, providing employees and business partners with immediate access to the information they need in languages they understand. Directory Server supports all international character sets by default because directory data is stored in UTF-8. Further, Directory Server can use specified matching rules and collation orders based on language preferences in search operations. Note ASCII characters are required for attribute and object class names. D.1. About Locales Directory Server provides support for multiple languages through the use of locales . A locale identifies language-specific information about how users of a specific region, culture, or custom expect data to be presented, including how data of a given language is interpreted and how data is to be sorted, or collated . In addition, the locale information indicates what code page should be used to represent a given language. A code page is an internal table that the operating system uses to relate keyboard keys to character font screen displays. More specifically, a locale defines four things: Collation order. The collation order provides language and cultural-specific information about how the characters of a given language are to be sorted. It identifies things like the sequence of the letters in the alphabet, how to compare letters with accents to letters without accents, and if there are any characters that can be ignored when comparing strings. The collation order also takes into account culture-specific information about a language, such as the direction in which the language is read (left to right, right to left, or up and down). Character type. The character type distinguishes alphabetic characters from numeric or other characters. For example, in some languages, the pipe (|) character is considered punctuation while in others it is considered alphabetic. In addition, it defines the mapping of upper-case to lower-case letters. Monetary format. The monetary format specifies the monetary symbol used by a specific region, whether the symbol goes before or after its value, and how monetary units are represented. Time/date format. The time and date format indicates the customary formatting for times and dates in the region. The time and date format indicates whether dates are customarily represented in the mm/dd/yy (month, day, year) or dd/mm/yy (day, month, year) format and specifies what the days of the week and month are in a given language. For example, the date January 10, 1996, is represented as 10. leden 1996 in Czech and 10 janvier 1996 in French. Because a locale describes cultural, customary, and regional differences in addition to mechanical language differences, the directory data can both be translated into the specific languages understood by users as well as be presented in a way that users in a given region expect. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/internationalization |
Red Hat OpenShift Cluster Manager | Red Hat OpenShift Cluster Manager Red Hat OpenShift Service on AWS 4 Configuring Red Hat OpenShift Service on AWS clusters using OpenShift Cluster Manager Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/red_hat_openshift_cluster_manager/index |
Chapter 3. Important Changes to External Kernel Parameters | Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7.5. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. Kernel parameters amd_iommu_intr = [HW,X86-64] Specifies one of the following AMD IOMMU interrupt remapping modes. legacy - Use legacy interrupt remapping mode. vapic - Use virtual APIC mode, which allows IOMMU to inject interrupts directly into guest. This mode requires kvm-amd.avic=1 , which is the default when IOMMU HW support is present. debug_pagealloc = [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter enables the feature at boot time. It is disabled by default. To avoid allocating a huge chunk of memory for debug pagealloc, do not enable it at boot time; the operating system will then work similarly to a kernel built without CONFIG_DEBUG_PAGEALLOC . Use debug_pagealloc = on to enable the feature. ftrace_graph_max_depth = uint [FTRACE] This parameter is used with the function graph tracer. It defines the maximum depth it will trace into a function. Its value can be changed at run time by the max_graph_depth file in the tracefs tracing directory. The default value is 0, which means that no limit is set. init_pkru = [x86] Specifies the default memory protection keys rights register contents for all processes. The default value is 0x55555554, which disallows access to all but pkey 0. You can override the value in the debugfs file system after boot. nopku = [x86] Disables the Memory Protection Keys CPU feature found in some Intel CPUs. mem_encrypt = [X86-64] Provides AMD Secure Memory Encryption (SME) control. The valid arguments are: on, off. The default setting depends on kernel configuration option: on : CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y off : CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=n mem_encrypt=on: Activate SME mem_encrypt=off: Do not activate SME Kernel parameters to mitigate Spectre and Meltdown issues kpti = [X86-64] Enables kernel page table isolation. nopti = [X86-64] Disables kernel page table isolation. nospectre_v2 = [X86] Disables all mitigations for the Spectre variant 2 (indirect branch speculation) vulnerability. The operating system may allow data leaks with this option, which is equivalent to spectre_v2=off. spectre_v2 = [X86] Controls mitigation of Spectre variant 2 (indirect branch speculation) vulnerability. The valid arguments are: on, off, auto. on: unconditionally enable off: unconditionally disable auto: kernel detects whether your CPU model is vulnerable Selecting on will, and auto may, choose a mitigation method at run time according to the CPU, the available microcode, the setting of the CONFIG_RETPOLINE configuration option, and the compiler with which the kernel was built. You can also select specific mitigations manually: retpoline: replaces indirect branches ibrs: Intel: Indirect Branch Restricted Speculation (kernel) ibrs_always: Intel: Indirect Branch Restricted Speculation (kernel and user space) Not specifying this option is equivalent to spectre_v2=auto. Updated /proc/sys/net/core entries dev_weight_rx_bias The RPS processing, for example RFS and aRFS , is competing with the registered NAPI poll function of the driver for the per softirq cycle netdev_budget .
This parameter influences the proportion of the configured netdev_budget that is spent on RPS-based packet processing during RX softirq cycles. It also makes the current dev_weight adaptable for asymmetric CPU needs on the receiving or transmitting side of the network stack. This parameter is effective on a per CPU basis. Determination is based on dev_weight , and it is calculated in a multiplicative way (dev_weight * dev_weight_rx_bias). The default value is 1. dev_weight_tx_bias This parameter scales the maximum number of packets that can be processed during a TX softirq cycle. It is effective on a per CPU basis, and allows scaling of the current dev_weight for asymmetric net stack processing needs. Make sure to avoid making TX softirq processing a CPU hog. Determination is based on dev_weight , and it is calculated in a multiplicative way (dev_weight * dev_weight_tx_bias). The default value is 1. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-kernel_parameters_changes
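Because these are ordinary /proc/sys/net/core entries, they can be inspected and tuned with sysctl. The example value below is purely illustrative and is not a recommendation from the release notes.

# Inspect the current values.
sysctl net.core.dev_weight net.core.dev_weight_rx_bias net.core.dev_weight_tx_bias
# Example only: give RPS-based receive processing a larger share of each softirq cycle.
sysctl -w net.core.dev_weight_rx_bias=4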
Chapter 8. Sources | Chapter 8. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 7: http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/ For Red Hat Enterprise Linux 8: http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/ | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/release_notes/sources |
probe::nfs.fop.open | probe::nfs.fop.open Name probe::nfs.fop.open - NFS client file open operation Synopsis nfs.fop.open Values flag file flag i_size file length in bytes dev device identifier file_name file name ino inode number | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-open |
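A one-line SystemTap script shows how this probe and its values fit together; the output format is an assumption, while the probe point and variable names are the ones documented above.

stap -e 'probe nfs.fop.open { printf("%s opened %s (ino %d, size %d)\n", execname(), file_name, ino, i_size) }'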
Chapter 2. Block Storage backup service deployment | Chapter 2. Block Storage backup service deployment The Block Storage backup service is optional. It is not installed by default, so you must add it to your overcloud deployment. Prerequisites An existing Red Hat OpenStack Platform (RHOSP) installation. An available storage source with a compatible backup driver: Object Storage (swift; default), Ceph, NFS, or Google Cloud storage. Note Google Cloud Storage requires additional configuration. For more information, see Appendix A, Google Cloud Storage configuration . 2.1. Configuring back end storage options for the backup service You can configure the backup service to use a different back end from the default. This procedure describes how to make a copy of the cinder-backup.yaml file and how to modify parameter defaults. Procedure Create a copy of the cinder-backup.yaml file, which is located in the /usr/share/openstack-tripleo-heat-templates/environments/ directory, and store it in the same location as your other custom templates. Your copy of the cinder-backup.yaml file contains default settings that configure an OpenStack Object Storage (swift) back end for the Block Storage backup service with Pacemaker. If this is the back end you are using for your backup, then you do not need to make changes to this file. If you are using an alternative back end, configure parameter_defaults depending on the backup back end: If you are using Red Hat Ceph Storage, configure parameter_defaults in the following way: CinderBackupBackend : (Required) ceph CinderBackupRbdPoolName : (Optional) Set to a custom RBD pool name. Default: backups If you are using NFS, configure parameter_defaults in the following way: CinderBackupBackend : (Required) nfs CinderBackupNfsShare : (Required) Set to the NFS share that you want to mount. The default value is empty. CinderBackupNfsMountOptions : (Optional) Set to your required mount option. Save the changes to the file. To enable the backup service and apply this configuration, add your backup settings environment file to the stack with your other environment files and deploy the overcloud: For more information and additional configuration options, see Appendix A, Google Cloud Storage configuration . 2.2. Deploying the overcloud with your Google Cloud configuration After you create the environment file in /home/stack/templates/, deploy the overcloud, and then restart the cinder-backup service. Procedure Log in as the stack user. Deploy the configuration: Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. Restart the cinder-backup service after the deployment finishes. Additional resources For more information, see Environment files and Including environment files in overcloud creation in the Director Installation and Usage guide. | [
"cp /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml /home/stack/templates/cinder-backup-settings.yaml",
"(undercloud) [stack@undercloud ~]USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/cinder-backup-settings.yaml",
"openstack overcloud deploy --templates -e /home/stack/templates/cinder-backup-settings.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/block_storage_backup_guide/assembly_backup_install |
probe::vm.pagefault.return | probe::vm.pagefault.return Name probe::vm.pagefault.return - Indicates what type of fault occurred Synopsis vm.pagefault.return Values name name of the probe point fault_type returns either 0 (VM_FAULT_OOM) for out of memory faults, 2 (VM_FAULT_MINOR) for minor faults, 3 (VM_FAULT_MAJOR) for major faults, or 1 (VM_FAULT_SIGBUS) if the fault was neither OOM, minor fault, nor major fault. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-pagefault-return |
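Similarly, the fault_type values listed above can be used to filter for a particular kind of fault. The sketch below reports only major faults (value 3); its output format is an assumption.

stap -e 'probe vm.pagefault.return { if (fault_type == 3) printf("major fault in %s\n", execname()) }'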
Chapter 21. Troubleshooting | Chapter 21. Troubleshooting This chapter covers common problems and solutions for Red Hat Enterprise Linux 6 virtualization issues. Read this chapter to develop an understanding of some of the common problems associated with virtualization technologies. Troubleshooting takes practice and experience, which are difficult to learn from a book. It is recommended that you experiment and test virtualization on Red Hat Enterprise Linux 6 to develop your troubleshooting skills. If you cannot find the answer in this document, there may be an answer online from the virtualization community. Refer to Section B.1, "Online Resources" for a list of Linux virtualization websites. 21.1. Debugging and Troubleshooting Tools This section summarizes the System Administrator applications, the networking utilities, and debugging tools. You can employ these standard System administration tools and logs to assist with troubleshooting: kvm_stat - refer to Section 21.4, "kvm_stat" trace-cmd ftrace Refer to the Red Hat Enterprise Linux Developer Guide vmstat iostat lsof systemtap crash sysrq sysrq t sysrq w These networking tools can assist with troubleshooting virtualization networking problems: ifconfig tcpdump The tcpdump command 'sniffs' network packets. tcpdump is useful for finding network abnormalities and problems with network authentication. There is a graphical version of tcpdump named wireshark . brctl brctl is a networking tool that inspects and configures the Ethernet bridge configuration in the Linux kernel. You must have root access before performing these example commands: Listed below are some other useful commands for troubleshooting virtualization. strace is a command which traces system calls and events received and used by another process. vncviewer : connect to a VNC server running on your server or a virtual machine. Install vncviewer using the yum install tigervnc command. vncserver : start a remote desktop on your server. Gives you the ability to run graphical user interfaces such as virt-manager via a remote session. Install vncserver using the yum install tigervnc-server command. | [
"brctl show bridge-name bridge-id STP enabled interfaces ----------------------------------------------------------------------------- virtbr0 8000.feffffff yes eth0 brctl showmacs virtbr0 port-no mac-addr local? aging timer 1 fe:ff:ff:ff:ff: yes 0.00 2 fe:ff:ff:fe:ff: yes 0.00 brctl showstp virtbr0 virtbr0 bridge-id 8000.fefffffffff designated-root 8000.fefffffffff root-port 0 path-cost 0 max-age 20.00 bridge-max-age 20.00 hello-time 2.00 bridge-hello-time 2.00 forward-delay 0.00 bridge-forward-delay 0.00 aging-time 300.01 hello-timer 1.43 tcn-timer 0.00 topology-change-timer 0.00 gc-timer 0.02"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-troubleshooting |
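To make the tcpdump suggestion concrete, capturing on the bridge interface from the brctl example is often the quickest way to confirm whether guest traffic reaches the host; the interface name and filter shown below are illustrative.

# Watch guest traffic crossing the bridge from the brctl output (numeric output, no name resolution).
tcpdump -n -i virtbr0
# Narrow the capture, for example to DHCP traffic from a guest that cannot obtain an address.
tcpdump -n -i virtbr0 port 67 or port 68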
Chapter 134. Spring Redis | Chapter 134. Spring Redis The producer and consumer are supported. This component allows sending and receiving messages from Redis . Redis is an advanced key-value store where keys can contain strings, hashes, lists, sets and sorted sets. In addition Redis provides pub/sub functionality for inter-app communications. Camel provides a producer for executing commands, a consumer for subscribing to pub/sub messages, and an idempotent repository for filtering out duplicate messages. Prerequisites To use this component, you must have a Redis server running. 134.1. Dependencies When using spring-redis with Red Hat build of Camel Spring Boot, ensure you use the following Maven dependency to have support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-redis-starter</artifactId> </dependency> Use the BOM to get the version. <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 134.2. URI Format spring-redis://host:port[?options] 134.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 134.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 134.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows you to not hardcode URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 134.4. Component Options The Spring Redis component supports 4 options, which are listed below. Name Description Default Type redisTemplate (common) Autowired Reference to a pre-configured RedisTemplate instance to use. RedisTemplate bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By starting lazy, Camel's routing error handlers handle any startup failures while routing messages. Beware that when the first message is processed, creating, and starting, the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring of JDBC data sources, JMS connection factories, and AWS Clients. true boolean 134.5. Endpoint Options The Spring Redis endpoint is configured using URI syntax: with the following path and query parameters: 134.5.1. Path Parameters (2 parameters) Name Description Default Type host (common) Required The host where the Redis server is running. String port (common) Required Redis server port number. Integer 134.5.2. Query Parameters (10 parameters) Name Description Default Type channels (common) List of topic names or name patterns to subscribe to. Multiple names can be separated by a comma. String command (common) Default command, which can be overridden by message header. Notice that the consumer only supports the following commands only: PSUBSCRIBE and SUBSCRIBE. Enum values: PING SET GET QUIT EXISTS DEL TYPE FLUSHDB KEYS RANDOMKEY RENAME RENAMENX RENAMEX DBSIZE EXPIRE EXPIREAT TTL SELECT MOVE FLUSHALL GETSET MGET SETNX SETEX MSET MSETNX DECRBY DECR INCRBY INCR APPEND SUBSTR HSET HGET HSETNX HMSET HMGET HINCRBY HEXISTS HDEL HLEN HKEYS HVALS HGETALL RPUSH LPUSH LLEN LRANGE LTRIM LINDEX LSET LREM LPOP RPOP RPOPLPUSH SADD SMEMBERS SREM SPOP SMOVE SCARD SISMEMBER SINTER SINTERSTORE SUNION SUNIONSTORE SDIFF SDIFFSTORE SRANDMEMBER ZADD ZRANGE ZREM ZINCRBY ZRANK ZREVRANK ZREVRANGE ZCARD ZSCORE MULTI DISCARD EXEC WATCH UNWATCH SORT BLPOP BRPOP AUTH SUBSCRIBE PUBLISH UNSUBSCRIBE PSUBSCRIBE PUNSUBSCRIBE ZCOUNT ZRANGEBYSCORE ZREVRANGEBYSCORE ZREMRANGEBYRANK ZREMRANGEBYSCORE ZUNIONSTORE ZINTERSTORE SAVE BGSAVE BGREWRITEAOF LASTSAVE SHUTDOWN INFO MONITOR SLAVEOF CONFIG STRLEN SYNC LPUSHX PERSIST RPUSHX ECHO LINSERT DEBUG BRPOPLPUSH SETBIT GETBIT SETRANGE GETRANGE PEXPIRE PEXPIREAT GEOADD GEODIST GEOHASH GEOPOS GEORADIUS GEORADIUSBYMEMBER SET Command connectionFactory (common) Reference to a pre-configured RedisConnectionFactory instance to use. RedisConnectionFactory redisTemplate (common) Reference to a pre-configured RedisTemplate instance to use. RedisTemplate serializer (common) Reference to a pre-configured RedisSerializer instance to use. RedisSerializer bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occurred while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. The consumer defaults to use the org.apache.camel.spi.ExceptionHandler to deal with exceptions. 
These exceptions log at WARN or ERROR level and ignored. False Boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. If you enable the bridgeErrorHandler option, this option is not used. By default, the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern listenerContainer (consumer (advanced)) Reference to a pre-configured RedisMessageListenerContainer instance to use. RedisMessageListenerContainer lazyStartProducer (Producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route startup to fail. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean 134.6. Message Headers The Spring Redis component supports 29 message header(s), which is/are listed below: Name Description Default Type CamelRedis.Command (producer) Constant: COMMAND The command to perform. String CamelRedis.Key (common) Constant: KEY The key. String CamelRedis.Keys (common) Constant: KEYS The keys. Collection CamelRedis.Field (common) Constant: FIELD The field. String CamelRedis.Fields (common) Constant: FIELDS The fields. Collection CamelRedis.Value (common) Constant: VALUE The value. Object CamelRedis.Values (common) Constant: VALUES The values. Map CamelRedis.Start (common) Constant: START Start. Long CamelRedis.End (common) Constant: END End. Long CamelRedis.Timeout (common) Constant: TIMEOUT The timeout. Long CamelRedis.Offset (common) Constant: OFFSET The offset. Long CamelRedis.Destination (common) Constant: DESTINATION The destination. String CamelRedis.Channel (common) Constant: CHANNEL The channel. byte[] or String CamelRedis.Message (common) Constant: MESSAGE The message. Object CamelRedis.Index (common) Constant: INDEX The index. Long CamelRedis.Position (common) Constant: POSITION The position. String CamelRedis.Pivot (common) Constant: PIVOT The pivot. String CamelRedis.Count (common) Constant: COUNT Count. Long CamelRedis.Timestamp (common) Constant: TIMESTAMP The timestamp. Long CamelRedis.Pattern (common) Constant: PATTERN The pattern. byte[] or String CamelRedis.Db (common) Constant: DB The db. Integer CamelRedis.Score (common) Constant: SCORE The score. Double CamelRedis.Min (common) Constant: MIN The min. Double CamelRedis.Max (common) Constant: MAX The max. Double CamelRedis.Increment (common) Constant: INCREMENT Increment. Double CamelRedis.WithScore (common) Constant: WITHSCORE WithScore. Boolean CamelRedis.Latitude (common) Constant: LATITUDE Latitude. Double CamelRedis.Longitude (common) Constant: LONGITUDE Latitude. Double CamelRedis.Radius (common) Constant: RADIUS Radius. Double 134.7. Usage Also, see the available unit tests . 
Redis Producer from("direct:start") .setHeader("CamelRedis.Key", constant(key)) .setHeader("CamelRedis.Value", constant(value)) .to("spring-redis://host:port?command=SET&redisTemplate=#redisTemplate"); Redis Consumer from("spring-redis://host:port?command=SUBSCRIBE&channels=myChannel") .log("Received message: USD{body}"); Note Where '//host:port' is URL address for running Redis server. 134.7.1. Message headers evaluated by the Redis producer The producer issues commands to the server and each command has a different set of parameters with specific types. The result from the command execution is returned in the message body. Hash Commands Description Parameters Result HSET Set the string value of a hash field RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void HGET Get the value of a hash field RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String) String HSETNX Set the value of a hash field, only if the field does not exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void HMSET Set multiple hash fields to multiple values RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUES /"CamelRedis.Values" (Map<String, Object>) Void HMGET Get the values of all the given hash fields RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELDS /"CamelRedis.Filds" (Collection<String>) Collection<Object> HINCRBY Increment the integer value of a hash field by the given number RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String), RedisConstants.VALUE /"CamelRedis.Value" (Long) Long HEXISTS Determine if a hash field exists RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String) Boolean HDEL Delete one or more hash fields RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String) Void HLEN Get the number of fields in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Long HKEYS Get all the fields in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Set<String> HVALS Get all the values in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Collection<Object> HGETALL Get all the fields and values in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Map<String, Object> List Commands Description Parameters Result RPUSH Append one or multiple values to a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long RPUSHX Append a value to a list, only if the list exists RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long LPUSH Prepend one or multiple values to a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long LLEN Get the length of a list RedisConstants.KEY /"CamelRedis.Key" (String) Long LRANGE Get a range of elements from a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long) List<Object> LTRIM Trim a list to the specified range RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long) Void LINDEX Get an element from a list by its index RedisConstants.KEY /"CamelRedis.Key" (String), 
RedisConstants.INDEX /"CamelRedis.Index" (Long) String LINSERT Insert an element before or after another element in a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.PIVOT /"CamelRedis.Pivot" (String), RedisConstants.POSITION /"CamelRedis.Position" (String) Long LSET Set the value of an element in a list by its index RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.INDEX /"CamelRedis.Index" (Long) Void LREM Remove elements from a list RedisConstants.KEY / RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.COUNT /"CamelRedis.Count" (Long) Long LPOP Remove and get the first element in a list RedisConstants.KEY /"CamelRedis.Key" (String) Object RPOP Remove and get the last element in a list RedisConstants.KEY /"CamelRedis.Key" (String) String RPOPLPUSH Remove the last element in a list, append it to another list and return it RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Object BRPOPLPUSH Pop a value from a list, push it to another list and return it; or block until one is available RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Object BLPOP Remove and get the first element in a list, or block until one is available RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Object BRPOP Remove and get the last element in a list, or block until one is available RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) String Set Commands Description Parameters Result SADD Add one or more members to a set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SMEMBERS Get all the members in a set RedisConstants.KEY /"CamelRedis.Key" (String) Set<Object> SREM Remove one or more members from a set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SPOP Remove and return a random member from a set RedisConstants.KEY /"CamelRedis.Key" (String) String SMOVE Move a member from one set to another RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Boolean SCARD Get the number of members in a set RedisConstants.KEY /"CamelRedis.Key" (String) Long SISMEMBER Determine if a given value is a member of a set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SINTER Intersect multiple sets RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String) Set<Object> SINTERSTORE Intersect multiple sets and store the resulting set in a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void SUNION Add multiple sets RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String) Set<Object> SUNIONSTORE Add multiple sets and store the resulting set in a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void SDIFF Subtract 
multiple sets RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String) Set<Object> SDIFFSTORE Subtract multiple sets and store the resulting set in a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void SRANDMEMBER Get one or multiple random members from a set RedisConstants.KEY /"CamelRedis.Key" (String) String Ordered set Commands Description Parameters Result ZADD Add one or more members to a sorted set, or update its score if it already exists RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.SCORE /"CamelRedis.Score" (Double) Boolean ZRANGE Return a range of members in a sorted set, by index RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long), RedisConstants.WITHSCORE /"CamelRedis.WithScore" (Boolean) Object ZREM Remove one or more members from a sorted set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean ZINCRBY Increment the score of a member in a sorted set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.INCREMENT /"CamelRedis.Increment" (Double) Double ZRANK Determine the index of a member in a sorted set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long ZREVRANK Determine the index of a member in a sorted set, with scores ordered from high to low RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long ZREVRANGE Return a range of members in a sorted set, by index, with scores ordered from high to low RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long), RedisConstants.WITHSCORE /"CamelRedis.WithScore" (Boolean) Object ZCARD Get the number of members in a sorted set RedisConstants.KEY /"CamelRedis.Key" (String) Long ZCOUNT Count the members in a sorted set with scores within the given values RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.MIN /"CamelRedis.Min" (Double), RedisConstants.MAX /"CamelRedis.Max" (Double) Long ZRANGEBYSCORE Return a range of members in a sorted set, by score RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.MIN /"CamelRedis.Min" (Double), RedisConstants.MAX /"CamelRedis.Max" (Double) Set<Object> ZREVRANGEBYSCORE Return a range of members in a sorted set, by score, with scores ordered from high to low RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.MIN /"CamelRedis.Min" (Double), RedisConstants.MAX /"CamelRedis.Max" (Double) Set<Object> ZREMRANGEBYRANK Remove all members in a sorted set within the given indexes RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"(Long), RedisConstants.END /"CamelRedis.End" (Long) Void ZREMRANGEBYSCORE Remove all members in a sorted set within the given scores RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"(Long), RedisConstants.END /"CamelRedis.End" (Long) Void ZUNIONSTORE Add multiple sorted sets and store the resulting sorted set in a new key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void ZINTERSTORE 
Intersect multiple sorted sets and store the resulting sorted set in a new key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void String Commands Description Parameters Result SET Set the string value of a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void GET Get the value of a key RedisConstants.KEY /"CamelRedis.Key" (String) Object STRLEN Get the length of the value stored in a key RedisConstants.KEY /"CamelRedis.Key" (String) Long APPEND Append a value to a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (String) Integer SETBIT Sets or clears the bit at offset in the string value stored at key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.OFFSET /"CamelRedis.Offset" (Long), RedisConstants.VALUE /"CamelRedis.Value" (Boolean) Void GETBIT Returns the bit value at offset in the string value stored at key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.OFFSET /"CamelRedis.Offset" (Long) Boolean SETRANGE Overwrite part of a string at key starting at the specified offset RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.OFFSET /"CamelRedis.Offset" (Long) Void GETRANGE Get a substring of the string stored at a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"(Long), RedisConstants.END /"CamelRedis.End" (Long) String SETNX Set the value of a key, only if the key does not exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SETEX Set the value and expiration of a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long), SECONDS Void DECRBY Decrement the integer value of a key by the given number RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Long) Long DECR Decrement the integer value of a key by one RedisConstants.KEY /"CamelRedis.Key" (String), Long INCRBY Increment the integer value of a key by the given amount RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Long) Long INCR Increment the integer value of a key by one RedisConstants.KEY /"CamelRedis.Key" (String) Long MGET Get the values of all the given keys RedisConstants.FIELDS /"CamelRedis.Filds" (Collection<String>) List<Object> MSET Set multiple keys to multiple values RedisConstants.VALUES /"CamelRedis.Values" (Map<String, Object>) Void MSETNX Set multiple keys to multiple values, only if none of the keys exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void GETSET Set the string value of a key and return its old value RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Object Key Commands Description Parameters Result EXISTS Determine if a key exists RedisConstants.KEY /"CamelRedis.Key" (String) Boolean DEL Delete a key RedisConstants.KEYS /"CamelRedis.Keys" (String) Void TYPE Determine the type stored at key RedisConstants.KEY /"CamelRedis.Key" (String) DataType KEYS Find all keys matching the given pattern RedisConstants.PATERN /"CamelRedis.Pattern" (String) Collection<String> RANDOMKEY Return a random key from the keyspace RedisConstants.PATERN 
/"CamelRedis.Pattern" (String), RedisConstants.VALUE /"CamelRedis.Value" (String) String RENAME Rename a key RedisConstants.KEY /"CamelRedis.Key" (String) Void RENAMENX Rename a key, only if the new key does not exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (String) Boolean EXPIRE Set a key's time to live in seconds RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Boolean SORT Sort the elements in a list, set or sorted set RedisConstants.KEY /"CamelRedis.Key" (String) List<Object> PERSIST Remove the expiration from a key RedisConstants.KEY /"CamelRedis.Key" (String) Boolean EXPIREAT Set the expiration for a key as a UNIX timestamp RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMESTAMP /"CamelRedis.Timestamp" (Long) Boolean PEXPIRE Set a key's time to live in milliseconds RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Boolean PEXPIREAT Set the expiration for a key as a UNIX timestamp specified in milliseconds RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMESTAMP /"CamelRedis.Timestamp" (Long) Boolean TTL Get the time to live for a key RedisConstants.KEY /"CamelRedis.Key" (String) Long MOVE Move a key to another database RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.DB /"CamelRedis.Db" (Integer) Boolean Geo Commands Description Parameters Result GEOADD Adds the specified geospatial items (latitude, longitude, name) to the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.LATITUDE /"CamelRedis.Latitude" (Double), RedisConstants.LONGITUDE /"CamelRedis.Longitude" (Double), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long GEODIST Return the distance between two members in the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUES /"CamelRedis.Values" (Object[]) Distance GEOHASH Return valid Geohash strings representing the position of an element in the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) List<String> GEOPOS Return the positions (longitude, latitude) of an element in the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) List<Point> GEORADIUS Return the element in the geospatial index for the specified key, which is within the borders of the area specified with the central location and the maximum distance from the center (the radius) RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.LATITUDE /"CamelRedis.Latitude" (Double), RedisConstants.LONGITUDE /"CamelRedis.Longitude" (Double), RedisConstants.RADIUS /"CamelRedis.Radius" (Double), RedisConstants.COUNT /"CamelRedis.Count" (Integer) GeoResults GEORADIUSBYMEMBER This command is exactly like GEORADIUS with the sole difference that instead of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.RADIUS /"CamelRedis.Radius" (Double), RedisConstants.COUNT /"CamelRedis.Count" (Integer) GeoResults Other Commands Description Parameters Result MULTI Mark the start of a transaction block none Void DISCARD Discard all commands issued after MULTI none 
Void EXEC Execute all commands issued after MULTI none Void WATCH Watch the given keys to determine the execution of the MULTI/EXEC block RedisConstants.KEYS /"CamelRedis.Keys" (String) Void UNWATCH Forget about all watched keys none Void ECHO Echo the given string RedisConstants.VALUE /"CamelRedis.Value" (String) String PING Ping the server none String QUIT Close the connection none Void PUBLISH Post a message to a channel RedisConstants.CHANNEL /"CamelRedis.Channel" (String), RedisConstants.MESSAGE /"CamelRedis.Message" (Object) Void 134.8. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.spring-redis.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configure JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean camel.component.spring-redis.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. False Boolean camel.component.spring-redis.enabled Whether to enable auto configuration of the spring-redis component. This is enabled by default. Boolean camel.component.spring-redis.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy, you can allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail start up. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean camel.component.spring-redis.redis-template Reference to a pre-configured RedisTemplate instance to use. The option is an org.springframework.data.redis.core.RedisTemplate type. RedisTemplate | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-redis-starter</artifactId> </dependency>",
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"spring-redis://host:port[?options]",
"spring-redis:host:port",
"from(\"direct:start\") .setHeader(\"CamelRedis.Key\", constant(key)) .setHeader(\"CamelRedis.Value\", constant(value)) .to(\"spring-redis://host:port?command=SET&redisTemplate=#redisTemplate\");",
"from(\"spring-redis://host:port?command=SUBSCRIBE&channels=myChannel\") .log(\"Received message: USD{body}\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-redis-component-starter |
Red Hat JBoss Data Virtualization for OpenShift | Red Hat JBoss Data Virtualization for OpenShift Red Hat JBoss Data Virtualization 6.4 Learn how to use Red Hat JBoss Data Virtualization with OpenShift. Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/red_hat_jboss_data_virtualization_for_openshift/index |
Chapter 1. About this guidance document | Chapter 1. About this guidance document This guide contains documentation about understanding, installing, and configuring Red Hat Certificate System. It is structured into the following parts: Part I, "Part I: Planning how to deploy Red Hat Certificate System" Part II, "Part II: Installing Red Hat Certificate System" Part III, "Part III: Configuring Red Hat Certificate System" Chapter 16, Uninstalling Certificate System subsystems Note Administrators not familiar with Red Hat Certificate System are strongly encouraged to read Part I, "Part I: Planning how to deploy Red Hat Certificate System" for a good understanding of Certificate System and plan ahead accordingly before following Part II, "Part II: Installing Red Hat Certificate System" . Chapter 2, Introduction to Red Hat Certificate System provides a topical overview of several different parts of Certificate System. Chapter 4, Supported platforms lists various components and their supported version by Red Hat. Chapter 5, Planning the Certificate System contains helpful information for planning a Red Hat Certificate System installation. Refer to Chapter 3, Supported Standards and Protocols for a list of compliant and supported standards and protocols. To install Red Hat Certificate System in a compliant manner, follow the instructions in Part II, "Part II: Installing Red Hat Certificate System" . Begin with Chapter 6, Prerequisites for installation to prepare the base operating system. Follow Section 7.2, "Installing RHCS using the pkispawn utility" . Afterwards, follow the required post-installation steps in Section 7.13, "Post-installation" to ensure the complete installation is compliant. This last section links you to relevant parts of the guide which explains the steps to perform for full compliance. Important If installation fails during the pkispawn phase (discussed in Section 7.2, "Installing RHCS using the pkispawn utility" ), it is suggested to check your configuration carefully for mistakes and refer to the error logs. Prior to re-running the pkispawn utility to retry the installation, it is necessary to fully remove the old instance. Refer to Section 16.1, "Removing a subsystem" for information about removing subsystems. For a more complete list of configuration options that are supported and compliant, refer to Part III, "Part III: Configuring Red Hat Certificate System" . This part can only be followed after the installation is complete. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/about_guidance_document |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/providing_apis_in_the_developer_portal/proc-providing-feedback-on-redhat-documentation |
Chapter 4. Compiler and Tools | Chapter 4. Compiler and Tools gcc-libraries rebased to version 7.2.1 The gcc-libraries packages have been updated to upstream version 7.2.1. This update adds the following enhancements: The libgfortran.so Fortran library has been added to enable running applications built with Red Hat Developer Toolset. Support for certain DEC Fortran formatting extensions has been added to the Fortran library. (BZ# 1465568 , BZ#1554429) Support for retpolines added to GCC This update adds support for retpolines to GCC. Retpolines are a technique used by the kernel to reduce overhead of mitigating Spectre Variant 2 attacks described in CVE-2017-5715. (BZ# 1535656 , BZ#1553817) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/new_features_compiler_and_tools |
6.4. Reverting to an ext2 File System | 6.4. Reverting to an ext2 File System Because ext3 is relatively new, some disk utilities do not yet support it. For example, you may need to shrink a partition with resize2fs , which does not yet support ext3. In this situation, it may be necessary to temporarily revert a file system to ext2. To revert a partition, you must first unmount it by logging in as root and typing: Next, change the file system type to ext2 by typing the following command as root: Check the partition for errors by typing the following command as root: Then mount the partition again as an ext2 file system by typing: In the above command, replace /mount/point with the mount point of the partition. Next, remove the .journal file at the root level of the partition by changing to the directory where it is mounted and typing: You now have an ext2 partition. If you want to permanently change the partition to ext2, remember to update the /etc/fstab file. Note You can increase the size of an ext3 file system using ext2online . ext2online allows you to increase the size of an ext3 file system while it is mounted (online), provided it is on a resizable logical volume. The root file system is set up by default on LVM2 logical volumes during installation. Note that ext2online only works on ext3 file systems. For more information, refer to man ext2online . | [
"umount /dev/mapper/VolGroup00-LogVol02",
"/sbin/tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02",
"/sbin/e2fsck -y /dev/mapper/VolGroup00-LogVol02",
"mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point",
"rm -f .journal"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/The_ext3_File_System-Reverting_to_an_ext2_File_System |
Chapter 2. Distribution of content in RHEL 9 | Chapter 2. Distribution of content in RHEL 9 In the following sections, learn how the software is distributed in Red Hat Enterprise Linux 9. 2.1. Repositories Red Hat Enterprise Linux (RHEL) distributes content through different repositories, for example: BaseOS Content in the BaseOS repository consists of the core set of the underlying operating system functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in earlier releases of RHEL. AppStream Content in the AppStream repository includes additional user-space applications, runtime languages, and databases in support of the varied workloads and use cases. Important Both the BaseOS and AppStream content sets are required by RHEL and are available in all RHEL subscriptions. CodeReady Linux Builder The CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Red Hat does not support packages included in the CodeReady Linux Builder repository. Additional resources Package manifest 2.2. Application Streams Red Hat provides multiple versions of user-space components as Application Streams, and they are updated more frequently than the core operating system packages. This provides more flexibility to customize Red Hat Enterprise Linux (RHEL) without impacting the underlying stability of the platform or specific deployments. Application Streams are available in the following formats: RPM format Modules, which are an extension to the RPM format Software Collections RHEL 9 improves the Application Streams experience by providing initial Application Stream versions as RPMs, which you can install by using the dnf install command. Starting with RHEL 9.1, Red Hat provides additional Application Stream versions as modules with a shorter life cycle. Important Each Application Stream has its own life cycle, and it can be the same or shorter than the life cycle of RHEL 9. See Red Hat Enterprise Linux Application Streams Life Cycle . Always determine which version of an Application Stream you want to install, and make sure to review the RHEL Application Stream life cycle first. Additional resources Red Hat Enterprise Linux 9: Application Compatibility Guide Package manifest Red Hat Enterprise Linux Application Streams Life Cycle 2.3. Modules A module is a set of RPM packages that represent a component. A typical module contains the following package types: Packages with an application Packages with the application-specific dependency libraries Packages with documentation for the application Packages with helper utilities 2.4. Module streams Module streams are filters that can be imagined as virtual repositories in the AppStream physical repository. Module streams represent versions of the AppStream components. Each of the streams receives updates independently, and they can depend on other module streams. Module streams can be active or inactive. Active streams give the system access to the RPM packages within the particular module stream, allowing the installation of the respective component version. A stream is active in the following cases: If an administrator explicitly enables it. If the stream is a dependency of an enabled module. If the stream is the default stream. Each module can have a default stream, but in Red Hat Enterprise Linux 9, no default streams are defined.
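For example, an administrator can make a stream active and install packages from it with dnf. This is an illustrative sketch only; the nodejs:18 stream and its development profile are taken from Example 2.1 later in this chapter and must be available in your enabled repositories:
# Make the nodejs:18 stream active so that its packages become available
dnf module enable nodejs:18
# Install the packages of a named profile from the active stream
dnf module install nodejs:18/development
After enabling, the dnf module list nodejs output marks the active stream with [e], as shown in the hint line of Example 2.1.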
If required, you can configure default streams as described in Defining custom default module streams and profiles . Only one stream of a particular module can be active at a given point in time. Therefore, only packages from a particular stream are available. Prior to selecting a particular stream for a runtime user application or a developer application, consider the following: Required functionality and which component versions support that functionality Compatibility with your application or use case The life cycle of the Application Stream and your update plan For a list of all available modules and streams, see the Package manifest . For per-component changes, see the Release Notes . Additional resources Modular dependencies and stream changes 2.5. Module profiles A module profile is a list of recommended packages to be installed together for a particular use case such as for a server, client, development, minimal install, or other. These package lists can contain packages outside the module stream, usually from the BaseOS repository or the dependencies of the stream. Installing packages by using a profile is a one-time action provided for the user's convenience. It is also possible to install packages by using multiple profiles of the same module stream without any further preparatory steps. Each module stream can have any number of profiles, including none. For any given module stream, some of its profiles can be marked as default and are then used for profile installation actions if you did not explicitly specify a profile. However, the existence of a default profile for a module stream is not required. Example 2.1. nodejs module profiles The nodejs module, which provides the Node.js runtime environment, offers the following profiles for installation: In this example, the following profiles are available: common : The production-ready packages. This is the default profile ( [d] ). development : The production-ready packages, including the Node.js development headers. minimal : The smallest set of packages that provides the Node.js runtime environment. s2i : Packages necessary for creating Node.js Source-to-Image (S2I) Linux containers. | [
"dnf module list nodejs Name Stream Profiles Summary nodejs 18 common [d], development, minimal, s2i Javascript runtime Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_distribution-of-content-in-rhel-9_managing-software-with-the-dnf-tool |
Chapter 10. Custom Metadata Repository | Chapter 10. Custom Metadata Repository 10.1. Custom Metadata Repository Traditionally the metadata for a Virtual Database is built by Teiid Designer and supplied to the Red Hat JBoss Data Virtualization engine through a VDB archive file. This VDB file contains the metadata files called INDEX files, that are then read by a specific instance of MetadataRepository by name INDEX . In the Dynamic VDB scenario, currently there are three import types available: NATIVE, DDL and FILE. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-custom_metadata_repository |
Chapter 3. Standalone upgrade | Chapter 3. Standalone upgrade In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.8 to the latest version of 3.13 is not supported. Instead, users would have to upgrade as follows: 3.8.z → 3.9.z 3.9.z → 3.10.z 3.10.z → 3.11.z 3.11.z → 3.12.z 3.12.z → 3.13.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal prior-minor-version-only upgrade path simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3: 3.10.z → 3.13.z 3.11.z → 3.13.z 3.12.z → 3.13.z For users wanting to upgrade the Red Hat Quay Operator, see Upgrading the Red Hat Quay Operator Overview . This document describes the steps needed to perform each individual upgrade. Determine your current version and then follow the steps in sequential order, starting with your current version and working up to your desired target version. Upgrade to 3.13.z from 3.12.z Upgrade to 3.13.z from 3.11.z Upgrade to 3.13.z from 3.10.z See the Red Hat Quay Release Notes for information on features for individual releases. The general procedure for a manual upgrade consists of the following steps: Stop the Quay and Clair containers. Back up the database and image storage (optional but recommended). Start Clair using the new version of the image. Wait until Clair is ready to accept connections before starting the new version of Quay. 3.1. Accessing images Red Hat Quay images from version 3.4.0 and later are available from registry.redhat.io and registry.access.redhat.com , with authentication set up as described in Red Hat Container Registry Authentication . 3.2. Upgrading the Clair PostgreSQL database If you are upgrading Red Hat Quay to version 3.13, you must migrate your Clair PostgreSQL database from PostgreSQL version 13 to version 15. This requires bringing down your Clair PostgreSQL 13 database and running a migration script to initiate the process. Use the following procedure to upgrade your Clair PostgreSQL database from version 13 to version 15. Important Clair security scans might become temporarily disrupted after the migration procedure has succeeded. Procedure Stop the Red Hat Quay container by entering the following command: USD sudo podman stop <quay_container_name> Stop the Clair container by running the following command: USD sudo podman stop <clair_container_id> Run the following Podman process from SCLOrg's Data Migration procedure, which allows for data migration from a remote PostgreSQL server: USD sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \ -v </host/data/directory:/var/lib/pgsql/data:Z> \ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15 1 Insert a name for your Clair PostgreSQL 15 migration database. 2 Your new Clair PostgreSQL 15 database container IP address. It can be obtained by running the following command: sudo podman inspect -f "{{.NetworkSettings.IPAddress}}" postgresql-quay . 3 You must specify a different volume mount point than the one from your initial Clair PostgreSQL 13 deployment, and modify the access control lists for said directory.
For example: USD mkdir -p /host/data/clair-postgresql15-directory USD setfacl -m u:26:-wx /host/data/clair-postgresql15-directory This prevents data from being overwritten by the new container. Stop the Clair PostgreSQL 13 container: USD sudo podman stop <clair_postgresql13_container_name> After completing the PostgreSQL migration, run the Clair PostgreSQL 15 container, using the new data volume mount from Step 3, for example, </host/data/clair-postgresql15-directory:/var/lib/postgresql/data> : USD sudo podman run -d --rm --name <postgresql15-clairv4> \ -e POSTGRESQL_USER=<clair_username> \ -e POSTGRESQL_PASSWORD=<clair_password> \ -e POSTGRESQL_DATABASE=<clair_database_name> \ -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> \ -p 5433:5432 \ -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> \ registry.redhat.io/rhel8/postgresql-15 Start the Red Hat Quay container by entering the following command: USD sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \ -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \ -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \ {productrepo}/{quayimage}:{productminv} Start the Clair container by entering the following command: USD sudo podman run -d --name clairv4 \ -p 8081:8081 -p 8088:8088 \ -e CLAIR_CONF=/clair/config.yaml \ -e CLAIR_MODE=combo \ registry.redhat.io/quay/clair-rhel8:{productminv} For more information, see Data Migration . 3.3. Upgrade to 3.13.z from 3.12.z 3.3.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.13.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.13.3 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6:1-110 Clair-PosgreSQL: registry.redhat.io/rhel8/postgresql-15 3.4. Upgrade to 3.13.z from 3.11.z 3.4.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.13.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.13.3 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6:1-110 Clair-PosgreSQL: registry.redhat.io/rhel8/postgresql-15 3.5. Upgrade to 3.13.z from 3.10.z 3.5.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.13.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.13.3 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6:1-110 Clair-PosgreSQL: registry.redhat.io/rhel8/postgresql-15 | [
"sudo podman stop <quay_container_name>",
"sudo podman stop <clair_container_id>",
"sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> \\ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15",
"mkdir -p /host/data/clair-postgresql15-directory",
"setfacl -m u:26:-wx /host/data/clair-postgresql15-directory",
"sudo podman stop <clair_postgresql13_container_name>",
"sudo podman run -d --rm --name <postgresql15-clairv4> -e POSTGRESQL_USER=<clair_username> -e POSTGRESQL_PASSWORD=<clair_password> -e POSTGRESQL_DATABASE=<clair_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5433:5432 -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> registry.redhat.io/rhel8/postgresql-15",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:{productminv}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/upgrade_red_hat_quay/standalone-upgrade |
Chapter 68. KafkaConnect schema reference | Chapter 68. KafkaConnect schema reference Property Description spec The specification of the Kafka Connect cluster. KafkaConnectSpec status The status of the Kafka Connect cluster. KafkaConnectStatus | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaConnect-reference |
Chapter 4. Installing a cluster on vSphere using the Assisted Installer | Chapter 4. Installing a cluster on vSphere using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. 4.1. Additional resources Installing OpenShift Container Platform with the Assisted Installer | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_vsphere/installing-vsphere-assisted-installer |
10.3. Configuring Attribute Encryption | 10.3. Configuring Attribute Encryption Use the command line or the web console to enable and disable attribute encryption for certain attributes. 10.3.1. Enabling Encryption of an Attribute Using the Command Line To configure Directory Server to store, for example, telephoneNumber attributes AES-encrypted in the userRoot database: Optionally, to encrypt existing telephoneNumber attributes, export the database. See Section 10.4.1, "Exporting an Encrypted Database" . Enable AES encryption for the telephoneNumber attribute in the userRoot database: If you exported the database to encrypt existing attributes as well, reimport the database. See Section 10.4.2, "Importing an LDIF File into an Encrypted Database" . 10.3.2. Enabling Encryption of an Attribute Using the Web Console To configure Directory Server to store, for example, telephoneNumber attributes AES-encrypted in the database: Optionally, to encrypt existing telephoneNumber attributes, export the database. See Section 10.4.1, "Exporting an Encrypted Database" . Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Open the Encrypted Attributes tab. Enter the name of the attribute to be encrypted. Click Add Attribute . If you exported the database to encrypt existing attributes as well, reimport the database. See Section 10.4.2, "Importing an LDIF File into an Encrypted Database" . 10.3.3. Disabling Encryption for an Attribute Using the Command Line To configure Directory Server to no longer store, for example, telephoneNumber attributes encrypted in the userRoot database: Optionally, to decrypt existing telephoneNumber attributes, export the database. See Section 10.4.1, "Exporting an Encrypted Database" . Disable encryption for the telephoneNumber attribute in the userRoot database: If you exported the database to decrypt existing attributes, reimport the database. See Section 10.4.2, "Importing an LDIF File into an Encrypted Database" . 10.3.4. Disabling Encryption of an Attribute Using the Web Console To configure Directory Server to no longer store, for example, telephoneNumber attributes encrypted in the database: Optionally, to decrypt existing telephoneNumber attributes, export the database. See Section 10.4.1, "Exporting an Encrypted Database" . Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Open the Encrypted Attributes tab. Click the Delete Attribute button to the right of the telephoneNumber attribute. Click Yes to confirm. If you exported the database to decrypt existing attributes, reimport the database. See Section 10.4.2, "Importing an LDIF File into an Encrypted Database" . 10.3.5. General Considerations after Enabling Attribute Encryption When you enable encryption for data that is already in the database: Unencrypted data can persist in the server's database page pool backing file. To remove this data: Stop the instance: Delete the /var/lib/dirsrv/slapd- instance_name /db/guardian file: Start the instance: After you enable encryption and successfully import the data, delete the LDIF file that contains the unencrypted data. After enabling encryption, Directory Server deletes the existing database and creates a new one when reimporting the data. The replication log file is not encrypted.
To protect this data, store it on an encrypted disk. Data in the server's memory (RAM) is unencrypted and can be temporarily stored in swap partitions. To protect this data, set up encrypted swap space. Important Even if you delete files that contain unencrypted data, this data can be restored under certain circumstances. | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend attr-encrypt --add-attr telephoneNumber userRoot",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend attr-encrypt --del-attr telephoneNumber userRoot",
"dsctl instance_name stop",
"rm /var/lib/dirsrv/slapd- instance_name /db/guardian",
"dsctl instance_name start"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring_attribute_encryption |
Chapter 11. Creating a Keycloak user | Chapter 11. Creating a Keycloak user This procedure creates a Keycloak user, with the hubadmin role, that can log in to automation hub with Super Administration privileges. Procedure Navigate to Operators → Installed Operators . Select the Red Hat Single Sign-On Operator project. Select the Keycloak Realm tab and click Create Keycloak User . On the Keycloak User form, select YAML view . Replace the default YAML file with the following: apiVersion: keycloak.org/v1alpha1 kind: KeycloakUser metadata: name: hubadmin-user labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform user: username: hub_admin firstName: Hub lastName: Admin email: [email protected] enabled: true emailVerified: false credentials: - type: password value: <ch8ngeme> clientRoles: automation-hub: - hubadmin Click Create and wait for the process to complete. When a user is created, the Operator creates a Secret containing both the username and password using the following naming pattern: credential-<realm name>-<username>-<namespace> . In this example the credential is called credential-ansible-automation-platform-hub-admin-rh-sso . After the user is created, the Operator does not update the user's password; password changes are not reflected in the Secret. | [
"apiVersion: keycloak.org/v1alpha1 kind: KeycloakUser metadata: name: hubadmin-user labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform user: username: hub_admin firstName: Hub lastName: Admin email: [email protected] enabled: true emailVerified: false credentials: - type: password value: <ch8ngeme> clientRoles: automation-hub: - hubadmin"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/proc-create-a-user_using-a-rhsso-operator |
Upgrade Guide | Upgrade Guide Red Hat Ceph Storage 7 Upgrading a Red Hat Ceph Storage Cluster Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/upgrade_guide/index |
Chapter 5. Probe [monitoring.coreos.com/v1] | Chapter 5. Probe [monitoring.coreos.com/v1] Description Probe defines monitoring for a set of static targets or ingresses. Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Ingress selection for target discovery by Prometheus. 5.1.1. .spec Description Specification of desired Ingress selection for target discovery by Prometheus. Type object Property Type Description authorization object Authorization section for this endpoint basicAuth object BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint bearerTokenSecret object Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator. interval string Interval at which targets are probed using the configured prober. If not specified Prometheus' global scrape interval is used. jobName string The job name assigned to scraped metrics by default. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. metricRelabelings array MetricRelabelConfigs to apply to samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs module string The module to use for probing specifying how to probe the target. Example module configuring in the blackbox exporter: https://github.com/prometheus/blackbox_exporter/blob/master/example.yml oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. prober object Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. scrapeTimeout string Timeout for scraping metrics from the Prometheus exporter. If not specified, the Prometheus global scrape interval is used. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. 
targets object Targets defines a set of static or dynamically discovered targets to probe. tlsConfig object TLS configuration to use when scraping the endpoint. 5.1.2. .spec.authorization Description Authorization section for this endpoint Type object Property Type Description credentials object The secret's key that contains the credentials of the request type string Set the authentication type. Defaults to Bearer, Basic will cause an error 5.1.3. .spec.authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.4. .spec.basicAuth Description BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 5.1.5. .spec.basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.6. .spec.basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.7. .spec.bearerTokenSecret Description Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.8. .spec.metricRelabelings Description MetricRelabelConfigs to apply to samples before ingestion. Type array 5.1.9. .spec.metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. 
uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 5.1.10. .spec.oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 5.1.11. .spec.oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 5.1.12. .spec.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 5.1.13. .spec.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.14. .spec.oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.15. .spec.prober Description Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty. Type object Required url Property Type Description path string Path to collect metrics from. Defaults to /probe . proxyUrl string Optional ProxyURL. scheme string HTTP scheme to use for scraping. Defaults to http . url string Mandatory URL of the prober. 5.1.16. 
.spec.targets Description Targets defines a set of static or dynamically discovered targets to probe. Type object Property Type Description ingress object ingress defines the Ingress objects to probe and the relabeling configuration. If staticConfig is also defined, staticConfig takes precedence. staticConfig object staticConfig defines the static list of targets to probe and the relabeling configuration. If ingress is also defined, staticConfig takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config . 5.1.17. .spec.targets.ingress Description ingress defines the Ingress objects to probe and the relabeling configuration. If staticConfig is also defined, staticConfig takes precedence. Type object Property Type Description namespaceSelector object From which namespaces to select Ingress objects. relabelingConfigs array RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the \__tmp_prometheus_ingress_address label. It can be used to customize the probed URL. The original scrape job's name is available via the \\__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelingConfigs[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs selector object Selector to select the Ingress objects. 5.1.18. .spec.targets.ingress.namespaceSelector Description From which namespaces to select Ingress objects. Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 5.1.19. .spec.targets.ingress.relabelingConfigs Description RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the \__tmp_prometheus_ingress_address label. It can be used to customize the probed URL. The original scrape job's name is available via the \\__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 5.1.20. .spec.targets.ingress.relabelingConfigs[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. 
Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 5.1.21. .spec.targets.ingress.selector Description Selector to select the Ingress objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.22. .spec.targets.ingress.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.23. .spec.targets.ingress.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.24. .spec.targets.staticConfig Description staticConfig defines the static list of targets to probe and the relabeling configuration. If ingress is also defined, staticConfig takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config . Type object Property Type Description labels object (string) Labels assigned to all metrics scraped from the targets. relabelingConfigs array RelabelConfigs to apply to the label set of the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelingConfigs[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs static array (string) The list of hosts to probe. 5.1.25. .spec.targets.staticConfig.relabelingConfigs Description RelabelConfigs to apply to the label set of the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 5.1.26. .spec.targets.staticConfig.relabelingConfigs[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. 
uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is '$1' separator string Separator placed between concatenated source label values. Default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 5.1.27. .spec.tlsConfig Description TLS configuration to use when scraping the endpoint. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. cert object Struct containing the client cert file for the targets. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 5.1.28. .spec.tlsConfig.ca Description Struct containing the CA cert to use for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 5.1.29. .spec.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 5.1.30. .spec.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.31. .spec.tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 5.1.32. .spec.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 5.1.33. .spec.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.34. .spec.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/probes GET : list objects of kind Probe /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes DELETE : delete collection of Probe GET : list objects of kind Probe POST : create a Probe /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes/{name} DELETE : delete a Probe GET : read the specified Probe PATCH : partially update the specified Probe PUT : replace the specified Probe 5.2.1. /apis/monitoring.coreos.com/v1/probes Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Probe Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ProbeList schema 401 - Unauthorized Empty 5.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes Table 5.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Probe Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Probe Table 5.7. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.8. HTTP responses HTTP code Reponse body 200 - OK ProbeList schema 401 - Unauthorized Empty HTTP method POST Description create a Probe Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.10. Body parameters Parameter Type Description body Probe schema Table 5.11. HTTP responses HTTP code Reponse body 200 - OK Probe schema 201 - Created Probe schema 202 - Accepted Probe schema 401 - Unauthorized Empty 5.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the Probe namespace string object name and auth scope, such as for teams and projects Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Probe Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Probe Table 5.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Probe schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Probe Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Patch schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Probe schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Probe Table 5.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.23. Body parameters Parameter Type Description body Probe schema Table 5.24. HTTP responses HTTP code Reponse body 200 - OK Probe schema 201 - Created Probe schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring_apis/probe-monitoring-coreos-com-v1 |
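As a practical complement to the Probe endpoint reference above, the following shell sketch shows one way to exercise the list and read operations. The namespace, Probe name, and bearer-token handling are illustrative assumptions, not part of the API specification.

# List Probe objects across all namespaces (GET /apis/monitoring.coreos.com/v1/probes)
oc get probes.monitoring.coreos.com --all-namespaces

# Read a single Probe in a namespace (GET .../namespaces/{namespace}/probes/{name})
oc get probe example-probe -n monitoring-example -o yaml

# The same list call issued directly against the API server with a bearer token
TOKEN=$(oc whoami -t)
API=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/monitoring.coreos.com/v1/namespaces/monitoring-example/probes?limit=5"

Combining the limit parameter with the continue token returned in the list metadata pages through large result sets, as described in the query parameter tables above.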
Chapter 35. Jira Update Issue Sink | Chapter 35. Jira Update Issue Sink Update fields of an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue code in Jira. issueTypeName / ce-issueTypeName : as the name of the issue type (example: Bug, Enhancement). issueSummary / ce-issueSummary : as the title or summary of the issue. issueAssignee / ce-issueAssignee : as the user assigned to the issue (Optional). issuePriorityName / ce-issuePriorityName : as the priority name of the issue (example: Critical, Blocker, Trivial) (Optional). issueComponents / ce-issueComponents : as list of string with the valid component names (Optional). issueDescription / ce-issueDescription : as the issue description (Optional). The issue description can be set from the body of the message or the issueDescription / ce-issueDescription in the header, however the body takes precedence. 35.1. Configuration Options The following table summarizes the configuration options available for the jira-update-issue-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 35.2. Dependencies At runtime, the jira-update-issue-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 35.3. Usage This section describes how you can use the jira-update-issue-sink . 35.3.1. Knative Sink You can use the jira-update-issue-sink Kamelet as a Knative sink by binding it to a Knative object. jira-update-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-163" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTypeName" value: "Bug" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueSummary" value: "The issue summary" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issuePriorityName" value: "Low" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 35.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 35.3.1.2. Procedure for using the cluster CLI Save the jira-update-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-update-issue-sink-binding.yaml 35.3.1.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-update-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a story 123" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 35.3.2. Kafka Sink You can use the jira-update-issue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jira-update-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-163" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTypeName" value: "Bug" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueSummary" value: "The issue summary" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issuePriorityName" value: "Low" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-update-issue-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 35.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 35.3.2.2. Procedure for using the cluster CLI Save the jira-update-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-update-issue-sink-binding.yaml 35.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-update-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a story 123" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 35.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-update-issue-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-163\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTypeName\" value: \"Bug\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueSummary\" value: \"The issue summary\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issuePriorityName\" value: \"Low\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-update-issue-sink-binding.yaml",
"kamel bind --name jira-update-issue-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value=\"This is a story 123\" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-163\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTypeName\" value: \"Bug\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueSummary\" value: \"The issue summary\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issuePriorityName\" value: \"Low\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-update-issue-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-update-issue-sink-binding.yaml",
"kamel bind --name jira-update-issue-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value=\"This is a story 123\" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jira-update-issue-sink |
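After creating the binding with either procedure, the following shell sketch is one way to confirm that the integration is running and delivering updates to Jira. It assumes the binding name used in the examples above and that the Camel K operator generates an Integration with the same name.

# Check that the KameletBinding and its generated Integration reach the Running phase
oc get kameletbinding jira-update-issue-sink-binding
oc get integration jira-update-issue-sink-binding

# Tail the integration logs to confirm that the issue update was sent to Jira
kamel logs jira-update-issue-sink-binding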
4.6. Booleans | 4.6. Booleans Booleans allow parts of SELinux policy to be changed at runtime, without any knowledge of SELinux policy writing. This allows changes, such as allowing services access to NFS volumes, without reloading or recompiling SELinux policy. 4.6.1. Listing Booleans For a list of Booleans, an explanation of what each one is, and whether they are on or off, run the semanage boolean -l command as the Linux root user. The following example does not list all Booleans and the output is shortened for brevity: Note To have more detailed descriptions, install the selinux-policy-devel package. The SELinux boolean column lists Boolean names. The Description column lists whether the Booleans are on or off, and what they do. The getsebool -a command lists Booleans, whether they are on or off, but does not give a description of each one. The following example does not list all Booleans: Run the getsebool boolean-name command to only list the status of the boolean-name Boolean: Use a space-separated list to list multiple Booleans: 4.6.2. Configuring Booleans Run the setsebool utility in the setsebool boolean_name on/off form to enable or disable Booleans. The following example demonstrates configuring the httpd_can_network_connect_db Boolean: Procedure 4.5. Configuring Booleans By default, the httpd_can_network_connect_db Boolean is off, preventing Apache HTTP Server scripts and modules from connecting to database servers: To temporarily enable Apache HTTP Server scripts and modules to connect to database servers, enter the following command as root: Use the getsebool utility to verify the Boolean has been enabled: This allows Apache HTTP Server scripts and modules to connect to database servers. This change is not persistent across reboots. To make changes persistent across reboots, run the setsebool -P boolean-name on command as root: [3] 4.6.3. Shell Auto-Completion It is possible to use shell auto-completion with the getsebool , setsebool , and semanage utilities. Use the auto-completion with getsebool and setsebool to complete both command-line parameters and Booleans. To list only the command-line parameters, add the hyphen character ("-") after the command name and hit the Tab key: To complete a Boolean, start writing the Boolean name and then hit Tab : The semanage utility is used with several command-line arguments that are completed one by one. The first argument of a semanage command is an option, which specifies what part of SELinux policy is managed: Then, one or more command-line parameters follow: Finally, complete the name of a particular SELinux entry, such as a Boolean, SELinux user, domain, or another. Start typing the entry and hit Tab : Command-line parameters can be chained in a command: [3] To temporarily revert to the default behavior, as the Linux root user, run the setsebool httpd_can_network_connect_db off command. For changes that persist across reboots, run the setsebool -P httpd_can_network_connect_db off command. | [
"~]# semanage boolean -l SELinux boolean State Default Description smartmon_3ware (off , off) Determine whether smartmon can mpd_enable_homedirs (off , off) Determine whether mpd can traverse",
"~]USD getsebool -a cvs_read_shadow --> off daemons_dump_core --> on",
"~]USD getsebool cvs_read_shadow cvs_read_shadow --> off",
"~]USD getsebool cvs_read_shadow daemons_dump_core cvs_read_shadow --> off daemons_dump_core --> on",
"~]USD getsebool httpd_can_network_connect_db httpd_can_network_connect_db --> off",
"~]# setsebool httpd_can_network_connect_db on",
"~]USD getsebool httpd_can_network_connect_db httpd_can_network_connect_db --> on",
"~]# setsebool -P httpd_can_network_connect_db on",
"~]# setsebool -[Tab] -P",
"~]USD getsebool samba_[Tab] samba_create_home_dirs samba_export_all_ro samba_run_unconfined samba_domain_controller samba_export_all_rw samba_share_fusefs samba_enable_home_dirs samba_portmapper samba_share_nfs",
"~]# setsebool -P virt_use_[Tab] virt_use_comm virt_use_nfs virt_use_sanlock virt_use_execmem virt_use_rawip virt_use_usb virt_use_fusefs virt_use_samba virt_use_xserver",
"~]# semanage [Tab] boolean export import login node port dontaudit fcontext interface module permissive user",
"~]# semanage fcontext -[Tab] -a -D --equal --help -m -o --add --delete -f -l --modify -S -C --deleteall --ftype --list -n -t -d -e -h --locallist --noheading --type",
"~]# semanage fcontext -a -t samba<tab> samba_etc_t samba_secrets_t sambagui_exec_t samba_share_t samba_initrc_exec_t samba_unconfined_script_exec_t samba_log_t samba_unit_file_t samba_net_exec_t",
"~]# semanage port -a -t http_port_t -p tcp 81"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-Booleans |
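Tying the commands above together, the following shell sketch (run as root) shows one possible end-to-end session for the NFS scenario mentioned at the start of this section; the httpd_use_nfs Boolean is used here purely for illustration.

# Find NFS-related Booleans and check the current state
semanage boolean -l | grep nfs
getsebool httpd_use_nfs

# Enable the Boolean until the next reboot only
setsebool httpd_use_nfs on

# Make the change persistent across reboots
setsebool -P httpd_use_nfs on

# Confirm the new state
getsebool httpd_use_nfs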
6.2. Configuration Tasks | Configuring Red Hat High Availability Add-On software with the ccs command consists of the following steps: Ensuring that ricci is running on all nodes in the cluster. Refer to Section 6.3, "Starting ricci". Creating a cluster. Refer to Section 6.4, "Creating and Modifying a Cluster". Configuring fence devices. Refer to Section 6.5, "Configuring Fence Devices". Configuring fencing for cluster members. Refer to Section 6.7, "Configuring Fencing for Cluster Members". Creating failover domains. Refer to Section 6.8, "Configuring a Failover Domain". Creating resources. Refer to Section 6.9, "Configuring Global Cluster Resources". Creating cluster services. Refer to Section 6.10, "Adding a Cluster Service to the Cluster". Configuring a quorum disk, if necessary. Refer to Section 6.13, "Configuring a Quorum Disk". Configuring global cluster properties. Refer to Section 6.14, "Miscellaneous Cluster Configuration". Propagating the cluster configuration file to all of the cluster nodes. Refer to Section 6.15, "Propagating the Configuration File to the Cluster Nodes". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-tasks-ccs-ca
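A condensed sketch of this workflow with the ccs command is shown below. The host and cluster names are hypothetical, and the fence device, failover domain, resource, and service steps are omitted for brevity; refer to the sections listed above for the full set of options at each step.

# Create the cluster and add its nodes (ricci must already be running on every node)
ccs -h node01.example.com --createcluster mycluster
ccs -h node01.example.com --addnode node01.example.com
ccs -h node01.example.com --addnode node02.example.com
ccs -h node01.example.com --addnode node03.example.com

# After fencing, failover domains, resources, and services have been configured,
# propagate the configuration file to all cluster nodes and activate it
ccs -h node01.example.com --sync --activate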
Chapter 17. Network-Bound Disk Encryption (NBDE) | Chapter 17. Network-Bound Disk Encryption (NBDE) 17.1. About disk encryption technology Network-Bound Disk Encryption (NBDE) allows you to encrypt root volumes of hard drives on physical and virtual machines without having to manually enter a password when restarting machines. 17.1.1. Disk encryption technology comparison To understand the merits of Network-Bound Disk Encryption (NBDE) for securing data at rest on edge servers, compare key escrow and TPM disk encryption without Clevis to NBDE on systems running Red Hat Enterprise Linux (RHEL). The following table presents some tradeoffs to consider around the threat model and the complexity of each encryption solution. Scenario Key escrow TPM disk encryption (without Clevis) NBDE Protects against single-disk theft X X X Protects against entire-server theft X X Systems can reboot independently from the network X No periodic rekeying X Key is never transmitted over a network X X Supported by OpenShift X X 17.1.1.1. Key escrow Key escrow is the traditional system for storing cryptographic keys. The key server on the network stores the encryption key for a node with an encrypted boot disk and returns it when queried. The complexities around key management, transport encryption, and authentication do not make this a reasonable choice for boot disk encryption. Although available in Red Hat Enterprise Linux (RHEL), key escrow-based disk encryption setup and management is a manual process and not suited to OpenShift Container Platform automation operations, including automated addition of nodes, and currently not supported by OpenShift Container Platform. 17.1.1.2. TPM encryption Trusted Platform Module (TPM) disk encryption is best suited for data centers or installations in remote protected locations. Full disk encryption utilities such as dm-crypt and BitLocker encrypt disks with a TPM bind key, and then store the TPM bind key in the TPM, which is attached to the motherboard of the node. The main benefit of this method is that there is no external dependency, and the node is able to decrypt its own disks at boot time without any external interaction. TPM disk encryption protects against decryption of data if the disk is stolen from the node and analyzed externally. However, for insecure locations this may not be sufficient. For example, if an attacker steals the entire node, the attacker can intercept the data when powering on the node, because the node decrypts its own disks. This applies to nodes with physical TPM2 chips as well as virtual machines with Virtual Trusted Platform Module (VTPM) access. 17.1.1.3. Network-Bound Disk Encryption (NBDE) Network-Bound Disk Encryption (NBDE) effectively ties the encryption key to an external server or set of servers in a secure and anonymous way across the network. This is not a key escrow, in that the nodes do not store the encryption key or transfer it over the network, but otherwise behaves in a similar fashion. Clevis and Tang are generic client and server components that provide network-bound encryption. Red Hat Enterprise Linux CoreOS (RHCOS) uses these components in conjunction with Linux Unified Key Setup-on-disk-format (LUKS) to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption. When a node starts, it attempts to contact a predefined set of Tang servers by performing a cryptographic handshake. 
If it can reach the required number of Tang servers, the node can construct its disk decryption key and unlock the disks to continue booting. If the node cannot access a Tang server due to a network outage or server unavailability, the node cannot boot and continues retrying indefinitely until the Tang servers become available again. Because the key is effectively tied to the node's presence in a network, an attacker attempting to gain access to the data at rest would need to obtain both the disks on the node, and network access to the Tang server as well. The following figure illustrates the deployment model for NBDE. The following figure illustrates NBDE behavior during a reboot. 17.1.1.4. Secret sharing encryption Shamir's secret sharing (sss) is a cryptographic algorithm to securely divide up, distribute, and re-assemble keys. Using this algorithm, OpenShift Container Platform can support more complicated mixtures of key protection. When you configure a cluster node to use multiple Tang servers, OpenShift Container Platform uses sss to set up a decryption policy that will succeed if at least one of the specified servers is available. You can create layers for additional security. For example, you can define a policy where OpenShift Container Platform requires both the TPM and one of the given list of Tang servers to decrypt the disk. 17.1.2. Tang server disk encryption The following components and technologies implement Network-Bound Disk Encryption (NBDE). Figure 17.1. NBDE scheme when using a LUKS1-encrypted volume. The luksmeta package is not used for LUKS2 volumes. Tang is a server for binding data to network presence. It makes a node containing the data available when the node is bound to a certain secure network. Tang is stateless and does not require Transport Layer Security (TLS) or authentication. Unlike escrow-based solutions, where the key server stores all encryption keys and has knowledge of every encryption key, Tang never interacts with any node keys, so it never gains any identifying information from the node. Clevis is a pluggable framework for automated decryption that provides automated unlocking of Linux Unified Key Setup-on-disk-format (LUKS) volumes. The Clevis package runs on the node and provides the client side of the feature. A Clevis pin is a plugin into the Clevis framework. There are three pin types: TPM2 Binds the disk encryption to the TPM2. Tang Binds the disk encryption to a Tang server to enable NBDE. Shamir's secret sharing (sss) Allows more complex combinations of other pins. It allows more nuanced policies such as the following: Must be able to reach one of these three Tang servers Must be able to reach three of these five Tang servers Must be able to reach the TPM2 AND at least one of these three Tang servers 17.1.3. Tang server location planning When planning your Tang server environment, consider the physical and network locations of the Tang servers. Physical location The geographic location of the Tang servers is relatively unimportant, as long as they are suitably secured from unauthorized access or theft and offer the required availability and accessibility to run a critical service. Nodes with Clevis clients do not require local Tang servers as long as the Tang servers are available at all times. Disaster recovery requires both redundant power and redundant network connectivity to Tang servers regardless of their location. 
Network location Any node with network access to the Tang servers can decrypt their own disk partitions, or any other disks encrypted by the same Tang servers. Select network locations for the Tang servers that ensure the presence or absence of network connectivity from a given host allows for permission to decrypt. For example, firewall protections might be in place to prohibit access from any type of guest or public network, or any network jack located in an unsecured area of the building. Additionally, maintain network segregation between production and development networks. This assists in defining appropriate network locations and adds an additional layer of security. Do not deploy Tang servers on the same resource, for example, the same rolebindings.rbac.authorization.k8s.io cluster, that they are responsible for unlocking. However, a cluster of Tang servers and other security resources can be a useful configuration to enable support of multiple additional clusters and cluster resources. 17.1.4. Tang server sizing requirements The requirements around availability, network, and physical location drive the decision of how many Tang servers to use, rather than any concern over server capacity. Tang servers do not maintain the state of data encrypted using Tang resources. Tang servers are either fully independent or share only their key material, which enables them to scale well. There are two ways Tang servers handle key material: Multiple Tang servers share key material: You must load balance Tang servers sharing keys behind the same URL. The configuration can be as simple as round-robin DNS, or you can use physical load balancers. You can scale from a single Tang server to multiple Tang servers. Scaling Tang servers does not require rekeying or client reconfiguration on the node when the Tang servers share key material and the same URL. Client node setup and key rotation only requires one Tang server. Multiple Tang servers generate their own key material: You can configure multiple Tang servers at installation time. You can scale an individual Tang server behind a load balancer. All Tang servers must be available during client node setup or key rotation. When a client node boots using the default configuration, the Clevis client contacts all Tang servers. Only n Tang servers must be online to proceed with decryption. The default value for n is 1. Red Hat does not support postinstallation configuration that changes the behavior of the Tang servers. 17.1.5. Logging considerations Centralized logging of Tang traffic is advantageous because it might allow you to detect such things as unexpected decryption requests. For example: A node requesting decryption of a passphrase that does not correspond to its boot sequence A node requesting decryption outside of a known maintenance activity, such as cycling keys 17.2. Tang server installation considerations Network-Bound Disk Encryption (NBDE) must be enabled when a cluster node is installed. However, you can change the disk encryption policy at any time after it was initialized at installation. 17.2.1. Installation scenarios Consider the following recommendations when planning Tang server installations: Small environments can use a single set of key material, even when using multiple Tang servers: Key rotations are easier. Tang servers can scale easily to permit high availability. 
Large environments can benefit from multiple sets of key material: Physically diverse installations do not require the copying and synchronizing of key material between geographic regions. Key rotations are more complex in large environments. Node installation and rekeying require network connectivity to all Tang servers. A small increase in network traffic can occur due to a booting node querying all Tang servers during decryption. Note that while only one Clevis client query must succeed, Clevis queries all Tang servers. Further complexity: Additional manual reconfiguration can permit the Shamir's secret sharing (sss) of any N of M servers online in order to decrypt the disk partition. Decrypting disks in this scenario requires multiple sets of key material, and manual management of Tang servers and nodes with Clevis clients after the initial installation. High level recommendations: For a single RAN deployment, a limited set of Tang servers can run in the corresponding domain controller (DC). For multiple RAN deployments, you must decide whether to run Tang servers in each corresponding DC or whether a global Tang environment better suits the other needs and requirements of the system. 17.2.2. Installing a Tang server To deploy one or more Tang servers, you can choose from the following options depending on your scenario: Deploying a Tang server using the NBDE Tang Server Operator Deploying a Tang server with SELinux in enforcing mode on RHEL systems Configuring a Tang server in the RHEL web console Deploying Tang as a container Using the nbde_server System Role for setting up multiple Tang servers 17.2.2.1. Compute requirements The computational requirements for the Tang server are very low. Any typical server grade configuration that you would use to deploy a server into production can provision sufficient compute capacity. High availability considerations are solely for availability and not additional compute power to satisfy client demands. 17.2.2.2. Automatic start at boot Due to the sensitive nature of the key material the Tang server uses, you should keep in mind that the overhead of manual intervention during the Tang server's boot sequence can be beneficial. By default, if a Tang server starts and does not have key material present in the expected local volume, it will create fresh material and serve it. You can avoid this default behavior by either starting with pre-existing key material or aborting the startup and waiting for manual intervention. 17.2.2.3. HTTP versus HTTPS Traffic to the Tang server can be encrypted (HTTPS) or plaintext (HTTP). There are no significant security advantages of encrypting this traffic, and leaving it decrypted removes any complexity or failure conditions related to Transport Layer Security (TLS) certificate checking in the node running a Clevis client. While it is possible to perform passive monitoring of unencrypted traffic between the node's Clevis client and the Tang server, the ability to use this traffic to determine the key material is at best a future theoretical concern. Any such traffic analysis would require large quantities of captured data. Key rotation would immediately invalidate it. Finally, any threat actor able to perform passive monitoring has already obtained the necessary network access to perform manual connections to the Tang server and can perform the simpler manual decryption of captured Clevis headers. 
However, because other network policies in place at the installation site might require traffic encryption regardless of application, consider leaving this decision to the cluster administrator. Additional resources Configuring automated unlocking of encrypted volumes using policy-based decryption in the RHEL 8 Security hardening document Official Tang server container Encrypting and mirroring disks during installation 17.3. Tang server encryption key management The cryptographic mechanism to recreate the encryption key is based on the blinded key stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node's encrypted disk, periodic rekeying is advisable. You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys. 17.3.1. Backing up keys for a Tang server The Tang server uses /usr/libexec/tangd-keygen to generate new keys and stores them in the /var/db/tang directory by default. To recover the Tang server in the event of a failure, back up this directory. The keys are sensitive and because they are able to perform the boot disk decryption of all hosts that have used them, the keys must be protected accordingly. Procedure Copy the backup key from the /var/db/tang directory to the temp directory from which you can restore the key. 17.3.2. Recovering keys for a Tang server You can recover the keys for a Tang server by accessing the keys from a backup. Procedure Restore the key from your backup folder to the /var/db/tang/ directory. When the Tang server starts up, it advertises and uses these restored keys. 17.3.3. Rekeying Tang servers This procedure uses a set of three Tang servers, each with unique keys, as an example. Using redundant Tang servers reduces the chances of nodes failing to boot automatically. Rekeying a Tang server, and all associated NBDE-encrypted nodes, is a three-step procedure. Prerequisites A working Network-Bound Disk Encryption (NBDE) installation on one or more nodes. Procedure Generate a new Tang server key. Rekey all NBDE-encrypted nodes so they use the new key. Delete the old Tang server key. Note Deleting the old key before all NBDE-encrypted nodes have completed their rekeying causes those nodes to become overly dependent on any other configured Tang servers. Figure 17.2. Example workflow for rekeying a Tang server 17.3.3.1. Generating a new Tang server key Prerequisites A root shell on the Linux machine running the Tang server. To facilitate verification of the Tang server key rotation, encrypt a small test file with the old key: # echo plaintext | clevis encrypt tang '{"url":"http://localhost:7500"}' -y >/tmp/encrypted.oldkey Verify that the encryption succeeded and the file can be decrypted to produce the same string plaintext : # clevis decrypt </tmp/encrypted.oldkey Procedure Locate and access the directory that stores the Tang server key. This is usually the /var/db/tang directory. 
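Before generating or hiding any keys, it is prudent to capture a backup of the existing key material, as described in Section 17.3.1. A minimal sketch, assuming the default key directory and an arbitrary backup path:
# tar -czf /root/tang-keys-backup.tar.gz -C /var/db tang
# chmod 600 /root/tang-keys-backup.tar.gz
Because this archive can unlock the boot disks of every host enrolled against these keys, protect it with the same care as the live key material.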
Check the currently advertised key thumbprint: # tang-show-keys 7500 Example output 36AHjNH3NZDSnlONLz1-V4ie6t8 Enter the Tang server key directory: # cd /var/db/tang/ List the current Tang server keys: # ls -A1 Example output 36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk During normal Tang server operations, there are two .jwk files in this directory: one for signing and verification, and another for key derivation. Disable advertisement of the old keys: # for key in *.jwk; do \ mv -- "USDkey" ".USDkey"; \ done New clients setting up Network-Bound Disk Encryption (NBDE) or requesting keys will no longer see the old keys. Existing clients can still access and use the old keys until they are deleted. The Tang server reads but does not advertise keys stored in UNIX hidden files, which start with the . character. Generate a new key: # /usr/libexec/tangd-keygen /var/db/tang List the current Tang server keys to verify the old keys are no longer advertised, as they are now hidden files, and new keys are present: # ls -A1 Example output .36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Tang automatically advertises the new keys. Note More recent Tang server installations include a helper /usr/libexec/tangd-rotate-keys directory that takes care of disabling advertisement and generating the new keys simultaneously. If you are running multiple Tang servers behind a load balancer that share the same key material, ensure the changes made here are properly synchronized across the entire set of servers before proceeding. Verification Verify that the Tang server is advertising the new key, and not advertising the old key: # tang-show-keys 7500 Example output WOjQYkyK7DxY_T5pMncMO5w0f6E Verify that the old key, while not advertised, is still available to decryption requests: # clevis decrypt </tmp/encrypted.oldkey 17.3.3.2. Rekeying all NBDE nodes You can rekey all of the nodes on a remote cluster by using a DaemonSet object without incurring any downtime to the remote cluster. Note If a node loses power during the rekeying, it is possible that it might become unbootable, and must be redeployed via Red Hat Advanced Cluster Management (RHACM) or a GitOps pipeline. Prerequisites cluster-admin access to all clusters with Network-Bound Disk Encryption (NBDE) nodes. All Tang servers must be accessible to every NBDE node undergoing rekeying, even if the keys of a Tang server have not changed. Obtain the Tang server URL and key thumbprint for every Tang server. Procedure Create a DaemonSet object based on the following template. This template sets up three redundant Tang servers, but can be easily adapted to other situations. 
Change the Tang server URLs and thumbprints in the NEW_TANG_PIN environment to suit your environment: apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - "/sbin/chroot" - "/host" - "/bin/bash" - "-ec" args: - | rm -f /tmp/rekey-complete || true echo "Current tang pin:" clevis-luks-list -d USDROOT_DEV -s 1 echo "Applying new tang pin: USDNEW_TANG_PIN" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c "USDNEW_TANG_PIN" echo "Pin applied successfully" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {"t":1,"pins":{"tang":[ {"url":"http://tangserver01:7500","thp":"WOjQYkyK7DxY_T5pMncMO5w0f6E"}, {"url":"http://tangserver02:7500","thp":"I5Ynh2JefoAO3tNH9TgI4obIaXI"}, {"url":"http://tangserver03:7500","thp":"38qWZVeDKzCPG9pHLqKzs6k1ons"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon In this case, even though you are rekeying tangserver01 , you must specify not only the new thumbprint for tangserver01 , but also the current thumbprints for all other Tang servers. Failure to specify all thumbprints for a rekeying operation opens up the opportunity for a man-in-the-middle attack. To distribute the daemon set to every cluster that must be rekeyed, run the following command: USD oc apply -f tang-rekey.yaml However, to run at scale, wrap the daemon set in an ACM policy. This ACM configuration must contain one policy to deploy the daemon set, a second policy to check that all the daemon set pods are READY, and a placement rule to apply it to the appropriate set of clusters. Note After validating that the daemon set has successfully rekeyed all servers, delete the daemon set. If you do not delete the daemon set, it must be deleted before the rekeying operation. Verification After you distribute the daemon set, monitor the daemon sets to ensure that the rekeying has completed successfully. The script in the example daemon set terminates with an error if the rekeying failed, and remains in the CURRENT state if successful. There is also a readiness probe that marks the pod as READY when the rekeying has completed successfully. This is an example of the output listing for the daemon set before the rekeying has completed: USD oc get -n openshift-machine-config-operator ds tang-rekey Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s This is an example of the output listing for the daemon set after the rekeying has completed successfully: USD oc get -n openshift-machine-config-operator ds tang-rekey Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h Rekeying usually takes a few minutes to complete. 
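For a single cluster, one way to block until every rekeying pod reports READY is to wait on the pod readiness condition directly. This is an illustrative sketch rather than part of the original procedure; the namespace and label selector follow the example daemon set, and the timeout is an arbitrary choice:
oc wait --for=condition=Ready pod -l name=tang-rekey -n openshift-machine-config-operator --timeout=15m
The readiness probe in the example daemon set only marks a pod READY after the new pin has been applied, so a successful wait implies that every node has been rekeyed.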
Note If you use ACM policies to distribute the daemon sets to multiple clusters, you must include a compliance policy that checks every daemon set's READY count is equal to the DESIRED count. In this way, compliance to such a policy demonstrates that all daemon set pods are READY and the rekeying has completed successfully. You could also use an ACM search to query all of the daemon sets' states. 17.3.3.3. Troubleshooting temporary rekeying errors for Tang servers To determine if the error condition from rekeying the Tang servers is temporary, perform the following procedure. Temporary error conditions might include: Temporary network outages Tang server maintenance Generally, when these types of temporary error conditions occur, you can wait until the daemon set succeeds in resolving the error or you can delete the daemon set and not try again until the temporary error condition has been resolved. Procedure Restart the pod that performs the rekeying operation using the normal Kubernetes pod restart policy. If any of the associated Tang servers are unavailable, try rekeying until all the servers are back online. 17.3.3.4. Troubleshooting permanent rekeying errors for Tang servers If, after rekeying the Tang servers, the READY count does not equal the DESIRED count after an extended period of time, it might indicate a permanent failure condition. In this case, the following conditions might apply: A typographical error in the Tang server URL or thumbprint in the NEW_TANG_PIN definition. The Tang server is decommissioned or the keys are permanently lost. Prerequisites The commands shown in this procedure can be run on the Tang server or on any Linux system that has network access to the Tang server. Procedure Validate the Tang server configuration by performing a simple encrypt and decrypt operation on each Tang server's configuration as defined in the daemon set. This is an example of an encryption and decryption attempt with a bad thumbprint: USD echo "okay" | clevis encrypt tang \ '{"url":"http://tangserver02:7500","thp":"badthumbprint"}' | \ clevis decrypt Example output Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'! This is an example of an encryption and decryption attempt with a good thumbprint: USD echo "okay" | clevis encrypt tang \ '{"url":"http://tangserver03:7500","thp":"goodthumbprint"}' | \ clevis decrypt Example output okay After you identify the root cause, remedy the underlying situation: Delete the non-working daemon set. Edit the daemon set definition to fix the underlying issue. This might include any of the following actions: Edit a Tang server entry to correct the URL and thumbprint. Remove a Tang server that is no longer in service. Add a new Tang server that is a replacement for a decommissioned server. Distribute the updated daemon set again. Note When replacing, removing, or adding a Tang server from a configuration, the rekeying operation will succeed as long as at least one original server is still functional, including the server currently being rekeyed. If none of the original Tang servers are functional or can be recovered, recovery of the system is impossible and you must redeploy the affected nodes. Verification Check the logs from each pod in the daemon set to determine whether the rekeying completed successfully. If the rekeying is not successful, the logs might indicate the failure condition. 
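A possible shortcut is to collect recent log lines from all rekeying pods at once by using the label selector from the example daemon set (an illustrative command, not part of the original procedure; on larger clusters you might also need to raise --max-log-requests):
oc logs -n openshift-machine-config-operator -l name=tang-rekey --tail=20 --prefix
To inspect a single pod instead, continue with the following steps.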
Locate the name of the container that was created by the daemon set: USD oc get pods -A | grep tang-rekey Example output openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m Print the logs from the container. The following log is from a completed successful rekeying operation: USD oc logs tang-rekey-7ks6h Example output Current tang pin: 1: sss '{"t":1,"pins":{"tang":[{"url":"http://10.46.55.192:7500"},{"url":"http://10.46.55.192:7501"},{"url":"http://10.46.55.192:7502"}]}}' Applying new tang pin: {"t":1,"pins":{"tang":[ {"url":"http://tangserver01:7500","thp":"WOjQYkyK7DxY_T5pMncMO5w0f6E"}, {"url":"http://tangserver02:7500","thp":"I5Ynh2JefoAO3tNH9TgI4obIaXI"}, {"url":"http://tangserver03:7500","thp":"38qWZVeDKzCPG9pHLqKzs6k1ons"} ]}} Updating binding... Binding edited successfully Pin applied successfully 17.3.4. Deleting old Tang server keys Prerequisites A root shell on the Linux machine running the Tang server. Procedure Locate and access the directory where the Tang server key is stored. This is usually the /var/db/tang directory: # cd /var/db/tang/ List the current Tang server keys, showing the advertised and unadvertised keys: # ls -A1 Example output .36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Delete the old keys: # rm .*.jwk List the current Tang server keys to verify the unadvertised keys are no longer present: # ls -A1 Example output Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Verification At this point, the server still advertises the new keys, but an attempt to decrypt based on the old key will fail. Query the Tang server for the current advertised key thumbprints: # tang-show-keys 7500 Example output WOjQYkyK7DxY_T5pMncMO5w0f6E Decrypt the test file created earlier to verify decryption against the old keys fails: # clevis decrypt </tmp/encryptValidation Example output Error communicating with the server! If you are running multiple Tang servers behind a load balancer that share the same key material, ensure the changes made are properly synchronized across the entire set of servers before proceeding. 17.4. Disaster recovery considerations This section describes several potential disaster situations and the procedures to respond to each of them. Additional situations will be added here as they are discovered or presumed likely to be possible. 17.4.1. Loss of a client machine The loss of a cluster node that uses the Tang server to decrypt its disk partition is not a disaster. Whether the machine was stolen, suffered hardware failure, or another loss scenario is not important: the disks are encrypted and considered unrecoverable. However, in the event of theft, a precautionary rotation of the Tang server's keys and rekeying of all remaining nodes would be prudent to ensure the disks remain unrecoverable even in the event the thieves subsequently gain access to the Tang servers. To recover from this situation, either reinstall or replace the node. 17.4.2. Planning for a loss of client network connectivity The loss of network connectivity to an individual node will cause it to become unable to boot in an unattended fashion. 
If you are planning work that might cause a loss of network connectivity, you can reveal the passphrase for an onsite technician to use manually, and then rotate the keys afterwards to invalidate it: Procedure Before the network becomes unavailable, show the password used in the first slot -s 1 of device /dev/vda2 with this command: USD sudo clevis luks pass -d /dev/vda2 -s 1 Invalidate that value and regenerate a new random boot-time passphrase with this command: USD sudo clevis luks regen -d /dev/vda2 -s 1 17.4.3. Unexpected loss of network connectivity If the network disruption is unexpected and a node reboots, consider the following scenarios: If any nodes are still online, ensure that they do not reboot until network connectivity is restored. This is not applicable for single-node clusters. The node will remain offline until such time that either network connectivity is restored, or a pre-established passphrase is entered manually at the console. In exceptional circumstances, network administrators might be able to reconfigure network segments to reestablish access, but this is counter to the intent of NBDE, which is that lack of network access means lack of ability to boot. The lack of network access at the node can reasonably be expected to impact that node's ability to function as well as its ability to boot. Even if the node were to boot via manual intervention, the lack of network access would make it effectively useless. 17.4.4. Recovering network connectivity manually A somewhat complex and manually intensive process is also available to the onsite technician for network recovery. Procedure The onsite technician extracts the Clevis header from the hard disks. Depending on BIOS lockdown, this might involve removing the disks and installing them in a lab machine. The onsite technician transmits the Clevis headers to a colleague with legitimate access to the Tang network who then performs the decryption. Due to the necessity of limited access to the Tang network, the technician should not be able to access that network via VPN or other remote connectivity. Similarly, the technician cannot patch the remote server through to this network in order to decrypt the disks automatically. The technician reinstalls the disk and manually enters the plain text passphrase provided by their colleague. The machine successfully starts even without direct access to the Tang servers. Note that the transmission of the key material from the install site to another site with network access must be done carefully. When network connectivity is restored, the technician rotates the encryption keys. 17.4.5. Emergency recovery of network connectivity If you are unable to recover network connectivity manually, consider the following steps. Be aware that these steps are discouraged if other methods to recover network connectivity are available. This method must only be performed by a highly trusted technician. Taking the Tang server's key material to the remote site is considered to be a breach of the key material and all servers must be rekeyed and re-encrypted. This method must be used in extreme cases only, or as a proof of concept recovery method to demonstrate its viability. Equally extreme, but theoretically possible, is to power the server in question with an Uninterruptible Power Supply (UPS), transport the server to a location with network connectivity to boot and decrypt the disks, and then restore the server at the original location on battery power to continue operation. 
If you want to use a backup manual passphrase, you must create it before the failure situation occurs. Just as attack scenarios become more complex with TPM and Tang compared to a stand-alone Tang installation, so emergency disaster recovery processes are also made more complex if leveraging the same method. 17.4.6. Loss of a network segment The loss of a network segment, making a Tang server temporarily unavailable, has the following consequences: OpenShift Container Platform nodes continue to boot as normal, provided other servers are available. New nodes cannot establish their encryption keys until the network segment is restored. In this case, ensure connectivity to remote geographic locations for the purposes of high availability and redundancy. This is because when you are installing a new node or rekeying an existing node, all of the Tang servers you are referencing in that operation must be available. A hybrid model for a vastly diverse network, such as five geographic regions in which each client is connected to the closest three servers, is worth investigating. In this scenario, new clients are able to establish their encryption keys with the subset of servers that are reachable. For example, in the set of tang1 , tang2 and tang3 servers, if tang2 becomes unreachable, clients can still establish their encryption keys with tang1 and tang3 , and at a later time re-establish with the full set. This can involve either manual intervention or more complex automation. 17.4.7. Loss of a Tang server The loss of an individual Tang server within a load balanced set of servers with identical key material is completely transparent to the clients. The temporary failure of all Tang servers associated with the same URL, that is, the entire load balanced set, can be considered the same as the loss of a network segment. Existing clients have the ability to decrypt their disk partitions so long as another preconfigured Tang server is available. New clients cannot enroll until at least one of these servers comes back online. You can mitigate the physical loss of a Tang server by either reinstalling the server or restoring the server from backups. Ensure that the backup and restore processes for the key material are adequately protected from unauthorized access. 17.4.8. Rekeying compromised key material If key material is potentially exposed to unauthorized third parties, such as through the physical theft of a Tang server or associated data, immediately rotate the keys. Procedure Rekey any Tang server holding the affected material. Rekey all clients using the Tang server. Destroy the original key material. Scrutinize any incidents that result in unintended exposure of the master encryption key. If possible, take compromised nodes offline and re-encrypt their disks. Tip Reformatting and reinstalling on the same physical hardware, although slow, is easy to automate and test. | [
"echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey",
"clevis decrypt </tmp/encrypted.oldkey",
"tang-show-keys 7500",
"36AHjNH3NZDSnlONLz1-V4ie6t8",
"cd /var/db/tang/",
"ls -A1",
"36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk",
"for key in *.jwk; do mv -- \"USDkey\" \".USDkey\"; done",
"/usr/libexec/tangd-keygen /var/db/tang",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encrypted.oldkey",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d USDROOT_DEV -s 1 echo \"Applying new tang pin: USDNEW_TANG_PIN\" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c \"USDNEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon",
"oc apply -f tang-rekey.yaml",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt",
"Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt",
"okay",
"oc get pods -A | grep tang-rekey",
"openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m",
"oc logs tang-rekey-7ks6h",
"Current tang pin: 1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully",
"cd /var/db/tang/",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"rm .*.jwk",
"ls -A1",
"Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encryptValidation",
"Error communicating with the server!",
"sudo clevis luks pass -d /dev/vda2 -s 1",
"sudo clevis luks regen -d /dev/vda2 -s 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/network-bound-disk-encryption-nbde |
Chapter 11. Converting a connected cluster to a disconnected cluster | Chapter 11. Converting a connected cluster to a disconnected cluster There might be some scenarios where you need to convert your OpenShift Container Platform cluster from a connected cluster to a disconnected cluster. A disconnected cluster, also known as a restricted cluster, does not have an active connection to the internet. As such, you must mirror the contents of your registries and installation media. You can create this mirror registry on a host that can access both the internet and your closed network, or copy images to a device that you can move across network boundaries. This topic describes the general process for converting an existing, connected cluster into a disconnected cluster. 11.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. 11.2. Prerequisites The oc client is installed. A running cluster. 
An installed mirror registry, which is a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an subscription to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . The mirror repository must be configured to share images. For example, a Red Hat Quay repository requires Organizations in order to share images. Access to the internet to obtain the necessary container images. 11.3. Preparing the cluster for mirroring Before disconnecting your cluster, you must mirror, or copy, the images to a mirror registry that is reachable by every node in your disconnected cluster. In order to mirror the images, you must prepare your cluster by: Adding the mirror registry certificates to the list of trusted CAs on your host. Creating a .dockerconfigjson file that contains your image pull secret, which is from the cloud.openshift.com token. Procedure Configuring credentials that allow image mirroring: Add the CA certificate for the mirror registry, in the simple PEM or DER file formats, to the list of trusted CAs. For example: USD cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/ where, </path/to/cert.crt> Specifies the path to the certificate on your local file system. Update the CA trust. For example, in Linux: USD update-ca-trust Extract the .dockerconfigjson file from the global pull secret: USD oc extract secret/pull-secret -n openshift-config --confirm --to=. Example output .dockerconfigjson Edit the .dockerconfigjson file to add your mirror registry and authentication credentials and save it as a new file: {"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}},"<registry>:<port>/<namespace>/":{"auth":"<token>"}}} where: <local_registry> Specifies the registry domain name, and optionally the port, that your mirror registry uses to serve content. auth Specifies the base64-encoded user name and password for your mirror registry. <registry>:<port>/<namespace> Specifies the mirror registry details. <token> Specifies the base64-encoded username:password for your mirror registry. For example: USD {"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==","email":"[email protected]"}, "quay.io":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==","email":"[email protected]"}, "registry.connect.redhat.com"{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==","email":"[email protected]"}, "registry.redhat.io":{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==","email":"[email protected]"}, "registry.svc.ci.openshift.org":{"auth":"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV"},"my-registry:5000/my-namespace/":{"auth":"dXNlcm5hbWU6cGFzc3dvcmQ="}}} 11.4. Mirroring the images After the cluster is properly configured, you can mirror the images from your external repositories to the mirror repository. Procedure Mirror the Operator Lifecycle Manager (OLM) images: USD oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds> where: product-version Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8 . 
mirror_registry Specifies the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. reg_creds Specifies the location of your modified .dockerconfigjson file. For example: USD oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*' Mirror the content for any other Red Hat-provided Operator: USD oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds> where: index_image Specifies the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as {index-image-pullspec} . mirror_registry Specifies the FQDN for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. reg_creds Optional: Specifies the location of your registry credentials file, if required. For example: USD oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*' Mirror the OpenShift Container Platform image repository: USD oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture> where: product-version Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.15-x86_64 . architecture Specifies the type of architecture for your server, such as x86_64 . local_registry Specifies the registry domain name for your mirror repository. local_repository Specifies the name of the repository to create in your registry, such as ocp4/openshift4 . For example: USD oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64 Example output info: Mirroring 109 images to mirror.registry.com/ocp/release ... mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... 
sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release Mirror any other registries, as needed: USD oc image mirror <online_registry>/my/image:latest <mirror_registry> Additional information For more information about mirroring Operator catalogs, see Mirroring an Operator catalog . For more information about the oc adm catalog mirror command, see the OpenShift CLI administrator command reference . 11.5. Configuring the cluster for the mirror registry After creating and mirroring the images to the mirror registry, you must modify your cluster so that pods can pull images from the mirror registry. You must: Add the mirror registry credentials to the global pull secret. Add the mirror registry server certificate to the cluster. Create an ImageContentSourcePolicy custom resource (ICSP), which associates the mirror registry with the source registry. Add mirror registry credential to the cluster global pull-secret: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. For example: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson Add the CA-signed mirror registry server certificate to the nodes in the cluster: Create a config map that includes the server certificate for the mirror registry USD oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config For example: S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config Use the config map to update the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"<config_map_name>"}}}' --type=merge For example: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Create an ICSP to redirect container pull requests from the online registries to the mirror registry: Create the ImageContentSourcePolicy custom resource: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the ICSP object: USD oc create -f registryrepomirror.yaml Example output imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. 
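Creating or updating an ImageContentSourcePolicy object causes the Machine Config Operator to roll the new registry configuration out to the nodes. One way to wait for that rollout to settle before starting verification is to wait on the Updated condition of the machine config pools (an illustrative sketch, not part of the original procedure; the timeout is an arbitrary choice):
oc wait --for=condition=Updated machineconfigpool --all --timeout=30m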
Verify that the credentials, CA, and ICSP for mirror registry were added: Log into a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check the config.json file for the credentials: sh-4.4# cat /var/lib/kubelet/config.json Example output {"auths":{"brew.registry.redhat.io":{"xx=="},"brewregistry.stage.redhat.io":{"auth":"xxx=="},"mirror.registry.com:443":{"auth":"xx="}}} 1 1 Ensure that the mirror registry and credentials are present. Change to the certs.d directory sh-4.4# cd /etc/docker/certs.d/ List the certificates in the certs.d directory: sh-4.4# ls Example output 1 Ensure that the mirror registry is in the list. Check that the ICSP added the mirror registry to the registries.conf file: sh-4.4# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "quay.io/openshift-release-dev/ocp-release" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:443/ocp/release" [[registry]] prefix = "" location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:443/ocp/release" The registry.mirror parameters indicate that the mirror registry is searched before the original registry. Exit the node. sh-4.4# exit 11.6. Ensure applications continue to work Before disconnecting the cluster from the network, ensure that your cluster is working as expected and all of your applications are working as expected. Procedure Use the following commands to check the status of your cluster: Ensure your pods are running: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m ... Ensure your nodes are in the READY status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.23.0 11.7. Disconnect the cluster from the network After mirroring all the required repositories and configuring your cluster to work as a disconnected cluster, you can disconnect the cluster from the network. Note The Insights Operator is degraded when the cluster loses its Internet connection. You can avoid this problem by temporarily disabling the Insights Operator until you can restore it. 11.8. Restoring a degraded Insights Operator Disconnecting the cluster from the network necessarily causes the cluster to lose the Internet connection. The Insights Operator becomes degraded because it requires access to Red Hat Insights . This topic describes how to recover from a degraded Insights Operator. 
Procedure Edit your .dockerconfigjson file to remove the cloud.openshift.com entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"[email protected]"} Save the file. Update the cluster secret with the edited .dockerconfigjson file: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson Verify that the Insights Operator is no longer degraded: USD oc get co insights Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d 11.9. Restoring the network If you want to reconnect a disconnected cluster and pull images from online registries, delete the cluster's ImageContentSourcePolicy (ICSP) objects. Without the ICSP, pull requests to external registries are no longer redirected to the mirror registry. Procedure View the ICSP objects in your cluster: USD oc get imagecontentsourcepolicy Example output NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h Delete all the ICSP objects you created when disconnecting your cluster: USD oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name> For example: USD oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0 Example output imagecontentsourcepolicy.operator.openshift.io "mirror-ocp" deleted imagecontentsourcepolicy.operator.openshift.io "ocp4-index-0" deleted imagecontentsourcepolicy.operator.openshift.io "qe45-index-0" deleted Wait for all the nodes to restart and return to the READY status and verify that the registries.conf file is pointing to the original registries and not the mirror registries: Log into a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Examine the registries.conf file: sh-4.4# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] 1 1 The registry and registry.mirror entries created by the ICSPs you deleted are removed. | [
"cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/",
"update-ca-trust",
"oc extract secret/pull-secret -n openshift-config --confirm --to=.",
".dockerconfigjson",
"{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}},\"<registry>:<port>/<namespace>/\":{\"auth\":\"<token>\"}}}",
"{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"quay.io\":{\"auth\":\"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"registry.connect.redhat.com\"{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==\",\"email\":\"[email protected]\"}, \"registry.redhat.io\":{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==\",\"email\":\"[email protected]\"}, \"registry.svc.ci.openshift.org\":{\"auth\":\"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV\"},\"my-registry:5000/my-namespace/\":{\"auth\":\"dXNlcm5hbWU6cGFzc3dvcmQ=\"}}}",
"oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>",
"oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'",
"oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>",
"oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'",
"oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>",
"oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64",
"info: Mirroring 109 images to mirror.registry.com/ocp/release mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release",
"oc image mirror <online_registry>/my/image:latest <mirror_registry>",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson",
"oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config",
"S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<config_map_name>\"}}}' --type=merge",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /var/lib/kubelet/config.json",
"{\"auths\":{\"brew.registry.redhat.io\":{\"xx==\"},\"brewregistry.stage.redhat.io\":{\"auth\":\"xxx==\"},\"mirror.registry.com:443\":{\"auth\":\"xx=\"}}} 1",
"sh-4.4# cd /etc/docker/certs.d/",
"sh-4.4# ls",
"image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1",
"sh-4.4# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-release\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\" [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\"",
"sh-4.4# exit",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.23.0",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"[email protected]\"}",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson",
"oc get co insights",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d",
"oc get imagecontentsourcepolicy",
"NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h",
"oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>",
"oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0",
"imagecontentsourcepolicy.operator.openshift.io \"mirror-ocp\" deleted imagecontentsourcepolicy.operator.openshift.io \"ocp4-index-0\" deleted imagecontentsourcepolicy.operator.openshift.io \"qe45-index-0\" deleted",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/connected-to-disconnected |
20.10. Connecting the Serial Console for the Guest Virtual Machine | 20.10. Connecting the Serial Console for the Guest Virtual Machine The virsh console domain [--devname devicename ] [--force] [--safe] command connects the virtual serial console for the guest virtual machine. This is very useful, for example, for guests that do not provide VNC or SPICE protocols (and thus do not offer video display for GUI tools) and that do not have a network connection (and thus cannot be interacted with using SSH). The optional --devname parameter refers to the device alias of an alternate console, serial, or parallel device configured for the guest virtual machine. If this parameter is omitted, the primary console will be opened. If the --safe option is specified, the connection is only attempted if the driver supports safe console handling. This option specifies that the server has to ensure exclusive access to console devices. Optionally, the --force option may be specified, which requests to disconnect any existing sessions, such as in the case of a broken connection. Example 20.19. How to start a guest virtual machine in console mode The following example starts a previously created guest1 virtual machine so that it connects to the serial console using safe console handling: # virsh console guest1 --safe | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-connecting_the_serial_console_for_the_guest_virtual_machine
Chapter 24. System Monitoring Tools | Chapter 24. System Monitoring Tools In order to configure the system, system administrators often need to determine the amount of free memory, how much free disk space is available, how the hard drive is partitioned, or what processes are running. 24.1. Viewing System Processes 24.1.1. Using the ps Command The ps command allows you to display information about running processes. It produces a static list, that is, a snapshot of what is running when you execute the command. If you want a constantly updated list of running processes, use the top command or the System Monitor application instead. To list all processes that are currently running on the system including processes owned by other users, type the following at a shell prompt: ps ax For each listed process, the ps ax command displays the process ID ( PID ), the terminal that is associated with it ( TTY ), the current status ( STAT ), the cumulated CPU time ( TIME ), and the name of the executable file ( COMMAND ). For example: To display the owner alongside each process, use the following command: ps aux Apart from the information provided by the ps ax command, ps aux displays the effective user name of the process owner ( USER ), the percentage of the CPU ( %CPU ) and memory ( %MEM ) usage, the virtual memory size in kilobytes ( VSZ ), the non-swapped physical memory size in kilobytes ( RSS ), and the time or date the process was started. For instance: You can also use the ps command in a combination with grep to see if a particular process is running. For example, to determine if Emacs is running, type: For a complete list of available command-line options, see the ps (1) manual page. 24.1.2. Using the top Command The top command displays a real-time list of processes that are running on the system. It also displays additional information about the system uptime, current CPU and memory usage, or total number of running processes, and allows you to perform actions such as sorting the list or killing a process. To run the top command, type the following at a shell prompt: top For each listed process, the top command displays the process ID ( PID ), the effective user name of the process owner ( USER ), the priority ( PR ), the nice value ( NI ), the amount of virtual memory the process uses ( VIRT ), the amount of non-swapped physical memory the process uses ( RES ), the amount of shared memory the process uses ( SHR ), the process status field S ), the percentage of the CPU ( %CPU ) and memory ( %MEM ) usage, the accumulated CPU time ( TIME+ ), and the name of the executable file ( COMMAND ). For example: Table 24.1, "Interactive top commands" contains useful interactive commands that you can use with top . For more information, see the top (1) manual page. Table 24.1. Interactive top commands Command Description Enter , Space Immediately refreshes the display. h , ? Displays a help screen. k Kills a process. You are prompted for the process ID and the signal to send to it. n Changes the number of displayed processes. You are prompted to enter the number. u Sorts the list by user. M Sorts the list by memory usage. P Sorts the list by CPU usage. q Terminates the utility and returns to the shell prompt. 24.1.3. Using the System Monitor Tool The Processes tab of the System Monitor tool allows you to view, search for, change the priority of, and kill processes from the graphical user interface. 
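The process-management actions that the System Monitor tool exposes graphically are also available from the command line, which is useful on systems without a desktop session. For example, with 12345 as an illustrative process ID, you can lower a process's priority or ask it to terminate as follows:
renice -n 10 -p 12345
kill -15 12345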
To install the tool, issue the following command as root : To start the System Monitor tool, either select Applications → System Tools → System Monitor from the panel, or type gnome-system-monitor at a shell prompt. Then click the Processes tab to view the list of running processes. Figure 24.1. System Monitor - Processes For each listed process, the System Monitor tool displays its name ( Process Name ), current status ( Status ), percentage of the CPU usage ( % CPU ), nice value ( Nice ), process ID ( ID ), memory usage ( Memory ), the channel the process is waiting in ( Waiting Channel ), and additional details about the session ( Session ). To sort the information by a specific column in ascending order, click the name of that column. Click the name of the column again to toggle the sort between ascending and descending order. By default, the System Monitor tool displays a list of processes that are owned by the current user. Selecting various options from the View menu allows you to: view only active processes, view all processes, view your processes, view process dependencies, view a memory map of a selected process, view the files opened by a selected process, and refresh the list of processes. Additionally, various options in the Edit menu allow you to: stop a process, continue running a stopped process, end a process, kill a process, change the priority of a selected process, and edit the System Monitor preferences, such as the refresh interval for the list of processes, or what information to show. You can also end a process by selecting it from the list and clicking the End Process button. | [
"~]USD ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:01 /sbin/init 2 ? S 0:00 [kthreadd] 3 ? S 0:00 [migration/0] 4 ? S 0:00 [ksoftirqd/0] 5 ? S 0:00 [migration/0] 6 ? S 0:00 [watchdog/0] [output truncated]",
"~]USD ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.1 19404 832 ? Ss Mar02 0:01 /sbin/init root 2 0.0 0.0 0 0 ? S Mar02 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S Mar02 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S Mar02 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S Mar02 0:00 [migration/0] root 6 0.0 0.0 0 0 ? R Mar02 0:00 [watchdog/0] [output truncated]",
"~]USD ps ax | grep emacs 12056 pts/3 S+ 0:00 emacs 12060 pts/2 S+ 0:00 grep --color=auto emacs",
"~]USD top top - 02:19:11 up 4 days, 10:37, 5 users, load average: 0.07, 0.13, 0.09 Tasks: 160 total, 1 running, 159 sleeping, 0 stopped, 0 zombie Cpu(s): 10.7%us, 1.0%sy, 0.0%ni, 88.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 760752k total, 644360k used, 116392k free, 3988k buffers Swap: 1540088k total, 76648k used, 1463440k free, 196832k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 14401 jhradile 20 0 313m 10m 5732 S 5.6 1.4 6:27.29 gnome-system-mo 1764 root 20 0 133m 23m 4756 S 5.3 3.2 6:32.66 Xorg 13865 jhradile 20 0 1625m 177m 6628 S 0.7 23.8 0:57.26 java 20 root 20 0 0 0 0 S 0.3 0.0 4:44.39 ata/0 2085 root 20 0 40396 348 276 S 0.3 0.0 1:57.13 udisks-daemon 1 root 20 0 19404 832 604 S 0.0 0.1 0:01.21 init 2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd 3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 4 root 20 0 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/0 5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 7 root 20 0 0 0 0 S 0.0 0.0 0:01.00 events/0 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuset 9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper 10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 netns 11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr 12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 pm [output truncated]",
"~]# yum install gnome-system-monitor"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-system_monitoring_tools |
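As a small addition to the ps and top examples above, both utilities can also be driven non-interactively, which is convenient for scripts and log files. The options shown are standard procps options; the line counts are arbitrary:
~]$ ps aux --sort=-%mem | head -n 6                      # the five processes using the most memory, plus the header line
~]$ ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 6
~]$ top -b -n 1 | head -n 15                             # one batch-mode snapshot of top, suitable for redirecting to a file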
6.11. Kernel | 6.11. Kernel kernel component Sun Fire X4500 data server enumerates the e1000 card with Peripheral Component Interconnect Extended (PCI-X) and enables 64-bit direct memory access (DMA), however, 64-bit DMA is not fully supported on this hardware. If possible, disable 64-bit DMA in BIOS. grubby component Use of multiboot images makes discerning different image types problematic during kernel updates. As a consequence, using the tboot package and multiple types of kernels at the same time does not work properly. If, for example, tboot is in use and the kernel-debug package is installed, bootloader configuration can sometimes reflect an incorrect image list. To avoid this, do not use the kernel-debug on a system utilizing tboot , or vice versa. If such a situation is unavoidable, manually verify that the bootloader configuration is reasonable after each update before rebooting. kexec-tools component When the debug kernel is installed and also used as the Red Hat Enterprise Linux kdump kernel, the reserved kdump memory must be increased to a minimum of 256 MB. To assure this setting, start the system-config-kdump tool, modify the kdump memory, and reboot your Linux instance. Alternatively, you can configure a particular kernel that is always used as the kdump kernel, independently of the running kernel. For more information, consult the Red Hat Enterprise Linux 6 Deployment Guide . kernel component Red Hat Enterprise Linux 6.4 changed the maximum read/write socket memory default value to be higher, allowing for better performance on some machines. It was observed that if the values of ?mem_max are not symmetrical between two machines, the performance can be negatively affected. To work around this problem, adjust the value of ?mem_max to be equal across all Red Hat Enterprise Linux systems in the network. kabi-whitelists component The vxfs module might not work properly on Red Hat Enterprise Linux 6.4 and later because of the broken radix_tree_gang_lookup_slot symbol. Consult Symantec should you require a workaround for this issue. kernel component Enabling TCP Segmentation Offload (TSO) on TAP interface may cause low throughput when the uplink is a high-speed interface. To improve throughput, turn off TSO on the tap interface of the virtual machine. kernel component When using Chelsio's iSCSI HBAs for an iSCSI root partition, the first boot after install fails. This occurs because Chelsio's iSCSI HBA is not properly detected. To work around this issue, users must add the iscsi_firmware parameter to grub's kernel command line. This will signal to dracut to boot from the iSCSI HBA. kernel component The installation of Red Hat Enterprise Linux 6.3 i386 and later may occasionally fail. To work around this issue, add the following parameter to the kernel command line: kernel component If a device reports an error, while it is opened (via the open(2) system call), then the device is closed (via the close(2) system call), and the /dev/disk/by-id link for the device may be removed. When the problem on the device that caused the error is resolved, the by-id link is not re-created. To work around this issue, run the following command: kernel component When an HBA that uses the mpt2sas driver is connected to a storage using an SAS switch LSI SAS 6160, the driver may become unresponsive during Controller Fail Drive Fail (CFDF) testing. This is due to faulty firmware that is present on the switch. 
To fix this issue, use a newer version (14.00.00.00 or later) of firmware for the LSI SAS 6160 switch. kernel component, BZ# 745713 In some cases, Red Hat Enterprise Linux 6 guests running fully-virtualized under Red Hat Enterprise Linux 5 experience a time drift or fail to boot. In other cases, drifting may start after migration of the virtual machine to a host with a different speed. This is due to limitations in the Red Hat Enterprise Linux 5 Xen hypervisor. To work around this, add the nohpet parameter or, alternatively, the clocksource=jiffies parameter to the kernel command line of the guest. Or, if running under Red Hat Enterprise Linux 5.7 or newer, locate the guest configuration file for the guest and add the hpet=0 parameter in it. kernel component On some systems, Xen full-virt guests may print the following message when booting: It is possible to avoid the memory trimming by using the disable_mtrr_trim kernel command line option. kernel component The perf record command becomes unresponsive when specifying a tracepoint event and a hardware event at the same time. kernel component On 64-bit PowerPC, the following command may cause kernel panic: kernel component Applications are increasingly using more than 1024 file descriptors. It is not recommended to increase the default soft limit of file descriptors because it may break applications that use the select() call. However, it is safe to increase the default hard limit; that way, applications requiring a large number of file descriptors can increase their soft limit without needing root privileges and without any user intervention. kernel component In network-only use of Brocade Converged Network Adapters (CNAs), switches that are not properly configured to work with Brocade FCoE functionality can cause a continuous linkup/linkdown condition. This causes continuous messages on the host console: To work around this issue, unload the Brocade bfa driver. kernel component In Red Hat Enterprise Linux 6, a legacy bug in the PowerEdge Expandable RAID Controller 5 (PERC5) causes the kdump kernel to fail to scan for SCSI devices. It is usually triggered when large amounts of I/O operations are pending on the controller in the first kernel before performing a kdump. kernel component, BZ# 679262 In Red Hat Enterprise Linux 6.2 and later, due to security concerns, addresses in /proc/kallsyms and /proc/modules show all zeros when accessed by a non-root user. kernel component Superfluous information is displayed on the console due to a correctable machine check error occurring. This information can be safely ignored by the user. Machine check error reporting can be disabled by using the nomce kernel boot option, which disables machine check error reporting, or the mce=ignore_ce kernel boot option, which disables correctable machine check error reporting. kernel component The order in which PCI devices are scanned may change from one major Red Hat Enterprise Linux release to another. This may result in device names changing, for example, when upgrading from Red Hat Enterprise Linux 5 to 6. You must confirm that a device you refer to during installation is the intended device. One way to assure the correctness of device names is to, in some configurations, determine the mapping from the controller name to the controller's PCI address in the older release, and then compare this to the mapping in the newer release, to ensure that the device name is as expected.
The following is an example from /var/log/messages: If the device name is incorrect, add the pci=bfsort parameter to the kernel command line, and check again. kernel component The minimum firmware version for NIC adapters managed by netxen_nic is 4.0.550. This includes the boot firmware which is flashed in option ROM on the adapter itself. kernel component High stress on 64-bit IBM POWER series machines prevents kdump from successfully capturing the vmcore . As a result, the second kernel is not loaded, and the system becomes unresponsive. kernel component Triggering kdump to capture a vmcore through the network using the Intel 82575EB ethernet device in a 32 bit environment causes the networking driver to not function properly in the kdump kernel, and prevent the vmcore from being captured. kernel component Memory Type Range Register (MTRR) setup on some hyperthreaded machines may be incorrect following a suspend/resume cycle. This can cause graphics performance (specifically, scrolling) to slow considerably after a suspend/resume cycle. To work around this issue, disable and then re-enable the hyperthreaded sibling CPUs around suspend/resume, for example: #!/bin/sh # Disable hyper-threading processor cores on suspend and hibernate, re-enable # on resume. # This file goes into /etc/pm/sleep.d/ case USD1 in hibernate|suspend) echo 0 > /sys/devices/system/cpu/cpu1/online echo 0 > /sys/devices/system/cpu/cpu3/online ;; thaw|resume) echo 1 > /sys/devices/system/cpu/cpu1/online echo 1 > /sys/devices/system/cpu/cpu3/online ;; esac kernel component In Red Hat Enterprise Linux 6.2, nmi_watchdog registers with the perf subsystem. Consequently, during boot, the perf subsystem grabs control of the performance counter registers, blocking OProfile from working. To resolve this, either boot with the nmi_watchdog=0 kernel parameter set, or run the following command to disable it at run time: To re-enable nmi-watchdog , use the following command kernel component, BZ# 603911 Due to the way ftrace works when modifying the code during start-up, the NMI watchdog causes too much noise and ftrace can not find a quiet period to instrument the code. Consequently, machines with more than 512 CPUs will encounter issues with the NMI watchdog. Such issues will return error messages similar to BUG: NMI Watchdog detected LOCKUP and have either ftrace_modify_code or ipi_handler in the backtrace. To work around this issue, disable NMI watchdog by setting the nmi_watchdog=0 kernel parameter, or using the following command at run time: kernel component On 64-bit POWER systems the EHEA NIC driver will fail when attempting to dump a vmcore via NFS. To work around this issue, utilize other kdump facilities, for example dumping to the local file system, or dumping over SSH. kernel component, BZ# 587909 A BIOS emulated floppy disk might cause the installation or kernel boot process to hang. To avoid this, disable emulated floppy disk support in the BIOS. kernel component The preferred method to enable nmi_watchdog on 32-bit x86 systems is to use either nmi_watchdog=2 or nmi_watchdog=lapic parameters. The parameter nmi_watchdog=1 is not supported. kernel component The kernel parameter, pci=noioapicquirk , is required when installing the 32-bit variant of Red Hat Enterprise Linux 6 on HP xw9300 workstations. Note that the parameter change is not required when installing the 64-bit variant. | [
"vmalloc=256MB",
"~]# echo 'change' > /sys/class/block/sdX/uevent",
"WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing <number>MB of RAM",
"~]# ./perf record -agT -e sched:sched_switch -F 100 -- sleep 3",
"bfa xxxx:xx:xx.x: Base port (WWN = xx:xx:xx:xx:xx:xx:xx:xx) lost fabric connectivity",
"kernel: cciss0: <0x3230> at PCI 0000:1f:00.0 IRQ 71 using DAC ... kernel: cciss1: <0x3230> at PCI 0000:02:00.0 IRQ 75 using DAC",
"#!/bin/sh Disable hyper-threading processor cores on suspend and hibernate, re-enable on resume. This file goes into /etc/pm/sleep.d/ case USD1 in hibernate|suspend) echo 0 > /sys/devices/system/cpu/cpu1/online echo 0 > /sys/devices/system/cpu/cpu3/online ;; thaw|resume) echo 1 > /sys/devices/system/cpu/cpu1/online echo 1 > /sys/devices/system/cpu/cpu3/online ;; esac",
"echo 0 > /proc/sys/kernel/nmi_watchdog",
"echo 1 > /proc/sys/kernel/nmi_watchdog",
"echo 0 > /proc/sys/kernel/nmi_watchdog"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/kernel_issues |
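For the file-descriptor note above, the following is a minimal sketch of raising the system-wide hard limit; the limit value 65536 and the use of /etc/security/limits.conf are illustrative and should be adapted to local policy:
~]$ ulimit -Hn                                                   # current hard limit for open files
~]# echo '*    hard    nofile    65536' >> /etc/security/limits.conf
~]$ ulimit -Hn                                                   # run in a new login session to confirm the new hard limit
Applications can then raise their own soft limit up to this value without root privileges, while programs that depend on select() keep working with the unchanged default soft limit.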
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/conscious-language-message_configuring-hana-scale-up-multitarget-system-replication-disaster-recovery |
6.3. Configuring NAT using nftables | 6.3. Configuring NAT using nftables With nftables , you can configure the following network address translation ( NAT ) types: Masquerading Source NAT ( SNAT ) Destination NAT ( DNAT ) Redirect 6.3.1. The different NAT types: masquerading, source NAT, destination NAT, and redirect These are the different network address translation ( NAT ) types: Masquerading and source NAT (SNAT) Use one of these NAT types to change the source IP address of packets. For example, Internet Service Providers do not route private IP ranges, such as 10.0.0.0/8 . If you use private IP ranges in your network and users should be able to reach servers on the Internet, map the source IP address of packets from these ranges to a public IP address. Both masquerading and SNAT are very similar. The differences are: Masquerading automatically uses the IP address of the outgoing interface. Therefore, use masquerading if the outgoing interface uses a dynamic IP address. SNAT sets the source IP address of packets to a specified IP and does not dynamically look up the IP of the outgoing interface. Therefore, SNAT is faster than masquerading. Use SNAT if the outgoing interface uses a fixed IP address. Destination NAT (DNAT) Use this NAT type to route incoming traffic to a different host. For example, if your web server uses an IP address from a reserved IP range and is, therefore, not directly accessible from the Internet, you can set a DNAT rule on the router to redirect incoming traffic to this server. Redirect This type is a special case of DNAT that redirects packets to the local machine depending on the chain hook. For example, if a service runs on a different port than its standard port, you can redirect incoming traffic from the standard port to this specific port. 6.3.2. Configuring masquerading using nftables Masquerading enables a router to dynamically change the source IP of packets sent through an interface to the IP address of the interface. This means that if the interface gets a new IP assigned, nftables automatically uses the new IP when replacing the source IP. The following procedure describes how to replace the source IP of packets leaving the host through the ens3 interface to the IP set on ens3 . Procedure 6.9. Configuring masquerading using nftables Create a table: Add the prerouting and postrouting chains to the table: Important Even if you do not add a rule to the prerouting chain, the nftables framework requires this chain to match incoming packet replies. Note that you must pass the -- option to the nft command to avoid that the shell interprets the negative priority value as an option of the nft command. Add a rule to the postrouting chain that matches outgoing packets on the ens3 interface: 6.3.3. Configuring source NAT using nftables On a router, Source NAT ( SNAT ) enables you to change the IP of packets sent through an interface to a specific IP address. The following procedure describes how to replace the source IP of packets leaving the router through the ens3 interface to 192.0.2.1 . Procedure 6.10. Configuring source NAT using nftables Create a table: Add the prerouting and postrouting chains to the table: Important Even if you do not add a rule to the prerouting chain, the nftables framework requires this chain to match outgoing packet replies. Note that you must pass the -- option to the nft command to avoid that the shell interprets the negative priority value as an option of the nft command. 
Add a rule to the postrouting chain that replaces the source IP of outgoing packets through ens3 with 192.0.2.1 : Additional resources For more information, see Section 6.6.2, "Forwarding incoming packets on a specific local port to a different host" 6.3.4. Configuring destination NAT using nftables Destination NAT enables you to redirect traffic on a router to a host that is not directly accessible from the Internet. The following procedure describes how to redirect incoming traffic sent to port 80 and 443 of the router to the host with the 192.0.2.1 IP address. Procedure 6.11. Configuring destination NAT using nftables Create a table: Add the prerouting and postrouting chains to the table: Important Even if you do not add a rule to the postrouting chain, the nftables framework requires this chain to match outgoing packet replies. Note that you must pass the -- option to the nft command to avoid that the shell interprets the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming traffic on the ens3 interface sent to port 80 and 443 to the host with the 192.0.2.1 IP: Depending on your environment, add either a SNAT or masquerading rule to change the source address: If the ens3 interface used dynamic IP addresses, add a masquerading rule: If the ens3 interface uses a static IP address, add a SNAT rule. For example, if the ens3 uses the 198.51.100.1 IP address: Additional resources For more information, see Section 6.3.1, "The different NAT types: masquerading, source NAT, destination NAT, and redirect" 6.3.5. Configuring a redirect using nftables The redirect feature is a special case of destination network address translation (DNAT) that redirects packets to the local machine depending on the chain hook. The following procedure describes how to redirect incoming and forwarded traffic sent to port 22 of the local host to port 2222 . Procedure 6.12. Configuring a redirect using nftables Create a table: Add the prerouting chain to the table: Note that you must pass the -- option to the nft command to avoid that the shell interprets the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming traffic on port 22 to port 2222 : Additional resources For more information, see Section 6.3.1, "The different NAT types: masquerading, source NAT, destination NAT, and redirect" | [
"nft add table nat",
"nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }",
"nft add rule nat postrouting oifname \"ens3\" masquerade",
"nft add table nat",
"nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }",
"nft add rule nat postrouting oifname \"ens3\" snat to 192.0.2.1",
"nft add table nat",
"nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }",
"nft add rule nat prerouting iifname ens3 tcp dport { 80, 443 } dnat to 192.0.2.1",
"nft add rule nat postrouting oifname \"ens3\" masquerade",
"nft add rule nat postrouting oifname \"ens3\" snat to 198.51.100.1",
"nft add table nat",
"nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; }",
"nft add rule nat prerouting tcp dport 22 redirect to 2222"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Configuring_NAT_using_nftables |
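After adding any of the rules above, you can inspect the result and, if necessary, start over. These are standard nft subcommands; the table name nat matches the examples in this section:
# nft list table ip nat        # show the chains and rules of the nat table
# nft list ruleset             # show the complete ruleset across all tables
# nft flush table ip nat       # remove all rules from the nat table but keep its chains
To keep the rules across reboots, export the ruleset, for example with nft list ruleset > /etc/sysconfig/nftables.conf, and enable the nftables service; the exact persistence mechanism can vary between releases, so verify the path used on your system.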
Chapter 1. Preparing to install on Nutanix | Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 6.5.2.7 or later Prism Central pc.2022.6 or later 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object When required Required permissions in Nutanix API Description Categories Always Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Always Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Always Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters Always View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets Always View_Subnet View the subnets that host the OpenShift Container Platform machines. Projects If you will associate a project with compute machines, control plane machines, or all machines. View_Project View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. 1.2.2. Cluster limits Available resources vary between clusters. The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such a IP addresses and networks. 1.2.3. 
Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. When you deploy a OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.4. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.4.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.4.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. 
You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_nutanix/preparing-to-install-on-nutanix |
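Before running the installation program, it can help to confirm that the two required DNS records described above resolve to the VIPs you plan to use. The cluster name ocp4 and base domain example.com are placeholders for your own values:
$ dig +short api.ocp4.example.com          # should return the API VIP
$ dig +short test.apps.ocp4.example.com    # any name under *.apps.<cluster_name>.<base_domain> should return the Ingress VIP
If either query returns nothing or the wrong address, fix the DNS records before you start the installation.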
Chapter 2. Installing and configuring Instance HA | Chapter 2. Installing and configuring Instance HA Red Hat OpenStack Platform (RHOSP) director deploys Instance High Availability (HA). However, you must perform additional steps to configure a new Instance HA deployment on a new overcloud. After you complete the steps, Instance HA will run on a subset of Compute nodes with a custom role. Important Instance HA is not supported in RHOSP hyperconverged infrastructure (HCI) environments. To use Instance HA in your RHOSP HCI environment, you must designate a subset of the Compute nodes with the ComputeInstanceHA role to use Instance HA. Red Hat Ceph Storage services must not be hosted on the Compute nodes that host Instance HA. Important To enable Instance HA in a different environment, such as an existing overcloud that uses standard or custom roles, perform only the procedures that are relevant to your deployment and adapt your templates accordingly. 2.1. Configuring the Instance HA role, flavor, and profile Before deploying Instance HA, add the Instance HA role to your roles-data.yaml file, create the Instance HA flavor, tag each Compute node that you want to manage with Instance HA with the Instance HA profile, and map the Instance HA role to the Instance HA flavor. Note You can modify the example file and role names in this procedure according to your environment. Procedure Add the ComputeInstanceHA role to your roles-data.yaml file and regenerate the file. The ComputeInstanceHA role includes all the services in the default Compute role, the ComputeInstanceHA services, and the PacemakerRemote services. Create the compute-instance-ha flavor to tag the Compute nodes to manage with Instance HA. Tag each Compute node that you want to manage with Instance HA with the compute-instance-ha profile, and replace <NODE UUID> with the actual UUID: Map the ComputeInstanceHA role to the compute-instance-ha flavor by creating an environment file with the following parameter: Additional resources Roles 2.2. Enabling fencing on an overcloud with Instance HA Enable fencing on all Controller and Compute nodes in the overcloud by creating an environment file with fencing information. Procedure Create the environment file in an accessible location, such as ~/templates , and include the following content: If you do not use shared storage for your Compute instances, add the following parameter to the environment file that you created: Additional resources Section 1.2, "Planning your Instance HA deployment" Fencing Controller Nodes with STONITH 2.3. Deploying the overcloud with Instance HA If you already deployed the overcloud, rerun the openstack overcloud deploy command with the additional Instance HA files you created. You can configure Instance HA for your overcloud at any time after you create the undercloud. Prerequisites The Instance HA role, flavor, and profile are configured. Fencing is enabled on the overcloud. Procedure Use the openstack overcloud deploy command with the -e option for each environment file that you created and with the compute-instanceha.yaml environment file. Replace <FLAVOR_ENV_FILE> and <FENCING_ENV_FILE> with the appropriate file names in your environment: Note Do not modify the compute-instanceha.yaml environment file. Include the full path to each environment file that you want to include in the overcloud deployment. After the deployment is complete, each Compute node includes a STONITH device and a GuestNode service. 2.4. 
Testing Instance HA evacuation To test that Instance HA evacuates instances correctly, you trigger evacuation on a Compute node and check that the Instance HA agents successfully evacuate and re-create the instance on a different Compute node. Warning The following procedure involves deliberately crashing a Compute node, which triggers the automated evacuation of instances with Instance HA. Prerequisites Instance HA is deployed on the Compute node. Procedure Start one or more instances on the overcloud. Log in to the Compute node that hosts the instances and change to the root user. Replace compute-n with the name of the Compute node: Crash the Compute node. Wait a few minutes for the node to restart, and then verify that the instances from the Compute node that you crashed are re-created on another Compute node: 2.5. Designating instances to evacuate with Instance HA By default, Instance HA evacuates all instances from a failed node. You can configure Instance HA to evacuate only instances with specific images or flavors. Prerequisites Instance HA is deployed on the overcloud. Procedure Log in to the undercloud as the stack user. Source the overcloudrc file: Use one of the following options: Tag an image: Replace <image_id> with the ID of the image that you want to evacuate. Tag a flavor: Replace <flavor_id> with the ID of the flavor that you want to evacuate. 2.6. Additional resources Director Installation and Usage Composable Services and Custom Roles | [
"openstack overcloud roles generate -o ~/my_roles_data.yaml Controller Compute ComputeInstanceHA",
"source ~/stackrc openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute-instance-ha openstack flavor set --property \"cpu_arch\"=\"x86_64\" --property \"capabilities:boot_option\"=\"local\" --property \"capabilities:profile\"=\"compute-instance-ha\" compute-instance-ha openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 --property resources:CUSTOM_BAREMETAL=1 compute-instance-ha",
"openstack baremetal node set --property capabilities='profile:compute-instance-ha,boot_option:local' <NODE UUID>",
"parameter_defaults: OvercloudComputeInstanceHAFlavor: compute-instance-ha",
"parameter_defaults: EnableFencing: true FencingConfig: devices: - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:c7\" params: login: admin ipaddr: 192.168.24.1 ipport: 6230 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:cb\" params: login: admin ipaddr: 192.168.24.1 ipport: 6231 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:cf\" params: login: admin ipaddr: 192.168.24.1 ipport: 6232 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:d3\" params: login: admin ipaddr: 192.168.24.1 ipport: 6233 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:d7\" params: login: admin ipaddr: 192.168.24.1 ipport: 6234 passwd: password lanplus: 1",
"parameter_defaults: ExtraConfig: tripleo::instanceha::no_shared_storage: true",
"openstack overcloud deploy --templates -e <FLAVOR_ENV_FILE> -e <FENCING_ENV_FILE> -r my_roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml",
"stack@director USD . overcloudrc stack@director USD openstack server create --image cirros --flavor 2 test-failover stack@director USD openstack server list -c Name -c Status",
"stack@director USD . stackrc stack@director USD ssh -l heat-admin compute-n heat-admin@ compute-n USD su -",
"root@ compute-n USD echo c > /proc/sysrq-trigger",
"stack@director USD openstack server list -c Name -c Status stack@director USD openstack compute service list",
"source ~/overcloudrc",
"(overcloud) USD openstack image set --tag evacuable <image_id>",
"(overcloud) USD openstack flavor set --property evacuable=true <flavor_id>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_for_compute_instances/assembly_installing-configuring-instanceha_rhosp |
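A quick way to confirm the tagging described above is to query the tagged resources with the same OpenStack CLI, assuming the overcloudrc credentials are sourced; the IDs are placeholders:
(overcloud) $ openstack image show <image_id> -c tags           # the evacuable tag should be listed
(overcloud) $ openstack flavor show <flavor_id> -c properties   # evacuable='true' should appear in the properties
(overcloud) $ openstack image list --tag evacuable              # list every image currently tagged for evacuation
Only instances built from tagged images or flavors are evacuated when a Compute node fails; all other instances on the failed node are left in place.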
Chapter 12. Installation configuration parameters for OpenStack | Chapter 12. Installation configuration parameters for OpenStack Before you deploy an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 12.1. Available installation configuration parameters for OpenStack The following tables specify the required, optional, and OpenStack-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 12.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. 
For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a non-overlapping range of private IP addresses for the cluster and service networks in each cluster. Table 12.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. 
The default value is 100.64.0.0/16 . 12.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. 
All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . 
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . + Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . + Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 12.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 12.4. Optional AWS parameters Parameter Description Values The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. The name of the IAM instance profile that you use for the machine. If you want the installation program to create the IAM instance profile for you, do not use the iamProfile parameter. You can specify either the iamProfile or iamRole parameter, but you cannot specify both. String The name of the IAM instance role that you use for the machine. When you specify an IAM role, the installation program creates an instance profile. If you want the installation program to create the IAM instance role for you, do not select the iamRole parameter. You can specify either the iamRole or iamProfile parameter, but you cannot specify both. String The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . The size in GiB of the root volume. Integer, for example 500 . The type of the root volume. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. 
Currently, AWS Graviton3 processors are only available in some regions. The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. The name of the IAM instance profile that you use for the machine. If you want the installation program to create the IAM instance profile for you, do not use the iamProfile parameter. You can specify either the iamProfile or iamRole parameter, but you cannot specify both. String The name of the IAM instance role that you use for the machine. When you specify an IAM role, the installation program creates an instance profile. If you want the installation program to create the IAM instance role for you, do not use the iamRole parameter. You can specify either the iamRole or iamProfile parameter, but you cannot specify both. String The Input/Output Operations Per Second (IOPS) that is reserved for the root volume on control plane machines. Integer, for example 4000 . The size in GiB of the root volume for control plane machines. Integer, for example 500 . The type of the root volume for control plane machines. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . An Amazon Resource Name (ARN) for an existing IAM role in the account containing the specified hosted zone. The installation program and cluster operators will assume this role when performing operations on the hosted zone. This parameter should only be used if you are installing a cluster into a shared VPC. String, for example arn:aws:iam::1234567890:role/shared-vpc-role . The AWS service endpoint name and URL. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name and valid AWS service endpoint URL. A map of keys and values that the installation program adds as tags to all resources that it creates. 
Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. The public IPv4 pool ID that is used to allocate Elastic IPs (EIPs) when publish is set to External . You must provision and advertise the pool in the same AWS account and region of the cluster. You must ensure that you have 2n + 1 IPv4 available in the pool where n is the total number of AWS zones used to deploy the Network Load Balancer (NLB) for API, NAT gateways, and bootstrap node. For more information about bring your own IP addresses (BYOIP) in AWS, see Onboard your BYOIP . A valid public IPv4 pool id Note BYOIP can be enabled only for customized installations that have no network restrictions. Prevents the S3 bucket from being deleted after completion of bootstrapping. true or false . The default value is false , which results in the S3 bucket being deleted. 12.1.5. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.5. Additional RHOSP parameters Parameter Description Values For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . For compute machines, the root volume types. A list of strings, for example, { performance-host1 , performance-host2 , performance-host3 }. [1] For compute machines, the root volume's type. This property is deprecated and is replaced by compute.platform.openstack.rootVolume.types . String, for example, performance . [2] For compute machines, the Cinder availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. This parameter is mandatory when compute.platform.openstack.zones is defined. A list of strings, for example ["zone-1", "zone-2"] . For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . For control plane machines, the root volume types. A list of strings, for example, { performance-host1 , performance-host2 , performance-host3 }. [1] For control plane machines, the root volume's type. This property is deprecated and is replaced by compute.platform.openstack.rootVolume.types . String, for example, performance . [2] For control plane machines, the Cinder availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. 
This parameter is mandatory when controlPlane.platform.openstack.zones is defined. A list of strings, for example ["zone-1", "zone-2"] . The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. In the cloud configuration in the clouds.yaml file, if possible, use application credentials rather than a user name and password combination. Using application credentials avoids disruptions from secret propagation that follow user name and password rotation. String, for example MyCloud . The RHOSP external network name to be used for installation. String, for example external . The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . If the machine pool defines zones , the count of types can either be a single item or match the number of items in zones . For example, the count of types cannot be 2 if there are 3 items in zones . If you have any existing reference to this property, the installer populates the corresponding value in the controlPlane.platform.openstack.rootVolume.types field. 12.1.6. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.6. Optional RHOSP parameters Parameter Description Values Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. A list of strings. For example, ["zone-1", "zone-2"] . Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. A list of strings. For example, ["zone-1", "zone-2"] .
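To show how the RHOSP root volume and placement parameters above fit together, here is a hedged install-config.yaml sketch; the cloud name, network name, volume type, and zone names are placeholders borrowed from the examples in the table, not recommended values.

platform:
  openstack:
    cloud: MyCloud                    # entry from clouds.yaml
    externalNetwork: external
controlPlane:
  name: master
  platform:
    openstack:
      zones:
      - zone-1
      rootVolume:
        size: 30
        types:
        - performance-host1
        zones:                        # Cinder zones; mandatory when zones above is set
        - zone-1
      serverGroupPolicy: soft-anti-affinity
compute:
- name: worker
  platform:
    openstack:
      zones:
      - zone-1
      - zone-2
      rootVolume:
        size: 30
        types:
        - performance-host1           # a single type is allowed even with two zones
        zones:
        - zone-1
        - zone-2
      serverGroupPolicy: soft-anti-affinity

The single entry under types with two compute zones is permitted by the rule in the footnote above; alternatively, one volume type can be listed per zone.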
Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . Subnets for the machines to use. A list of subnet names or UUIDs to use in cluster installation. A network for the machines to use. The UUID or name of an RHOSP network to use in cluster installation. The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . Whether or not to use the default, internal load balancer. If the value is set to UserManaged , this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. If the parameter is not set, or if the value is OpenShiftManagedDefault , the cluster uses the default load balancer. UserManaged or OpenShiftManagedDefault . The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.1.7. 
Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 12.7. Additional GCP parameters Parameter Description Values Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for control plane machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for compute machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. Specifies the email address of a GCP service account to be used during installations. This service account will be used to provision compute machines. String. The email address of the service account. The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. The name of the GCP project where the installation program installs the cluster. String. The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . The name of the existing subnet where you want to deploy your control plane machines. The subnet name. The name of the existing subnet where you want to deploy your compute machines. The subnet name. The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. The GCP disk type . The default disk type for all machines. Valid values are pd-balanced , pd-ssd , pd-standard , or hyperdisk-balanced . The default value is pd-ssd . Control plane machines cannot use the pd-standard disk type, so if you specify pd-standard as the default machine platform disk type, you must specify a different disk type using the controlPlane.platform.gcp.osDisk.diskType parameter. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for both types of machines. String. 
The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. The GCP location in which the KMS key ring exists. The GCP location. The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . Whether to enable Shielded VM secure boot for all machines in the cluster. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to use Confidential VMs for all machines in the cluster. Confidential VMs provide encryption for data during processing. For more information on Confidential computing, see Google's documentation on Confidential computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of all VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. The GCP disk type for control plane machines. Valid values are pd-balanced , pd-ssd , or hyperdisk-balanced . The default value is pd-ssd . Optional. Additional network tags to add to the control plane machines. 
If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for control plane machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for control plane machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of control plane VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . Specifies the email address of a GCP service account to be used during installations. This service account will be used to provision control plane machines. Important In the case of shared VPC installations, when the service account is not provided, the installer service account must have the resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy permissions in the host project. String. The email address of the service account. The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. The GCP disk type for compute machines. Valid values are pd-balanced , pd-ssd , pd-standard , or hyperdisk-balanced . The default value is pd-ssd . Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. 
One or more strings, for example compute-network-tag1 . The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for compute machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for compute machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of compute VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamProfile:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamProfile:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: publicIpv4Pool:",
"platform: aws: preserveBootstrapIgnition:",
"compute: platform: openstack: rootVolume: size:",
"compute: platform: openstack: rootVolume: types:",
"compute: platform: openstack: rootVolume: type:",
"compute: platform: openstack: rootVolume: zones:",
"controlPlane: platform: openstack: rootVolume: size:",
"controlPlane: platform: openstack: rootVolume: types:",
"controlPlane: platform: openstack: rootVolume: type:",
"controlPlane: platform: openstack: rootVolume: zones:",
"platform: openstack: cloud:",
"platform: openstack: externalNetwork:",
"platform: openstack: computeFlavor:",
"compute: platform: openstack: additionalNetworkIDs:",
"compute: platform: openstack: additionalSecurityGroupIDs:",
"compute: platform: openstack: zones:",
"compute: platform: openstack: serverGroupPolicy:",
"controlPlane: platform: openstack: additionalNetworkIDs:",
"controlPlane: platform: openstack: additionalSecurityGroupIDs:",
"controlPlane: platform: openstack: zones:",
"controlPlane: platform: openstack: serverGroupPolicy:",
"platform: openstack: clusterOSImage:",
"platform: openstack: clusterOSImageProperties:",
"platform: openstack: controlPlanePort: fixedIPs:",
"platform: openstack: controlPlanePort: network:",
"platform: openstack: defaultMachinePlatform:",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"platform: openstack: ingressFloatingIP:",
"platform: openstack: apiFloatingIP:",
"platform: openstack: externalDNS:",
"platform: openstack: loadbalancer:",
"platform: openstack: machinesSubnet:",
"controlPlane: platform: gcp: osImage: project:",
"controlPlane: platform: gcp: osImage: name:",
"compute: platform: gcp: osImage: project:",
"compute: platform: gcp: osImage: name:",
"compute: platform: gcp: serviceAccount:",
"platform: gcp: network:",
"platform: gcp: networkProjectID:",
"platform: gcp: projectID:",
"platform: gcp: region:",
"platform: gcp: controlPlaneSubnet:",
"platform: gcp: computeSubnet:",
"platform: gcp: defaultMachinePlatform: zones:",
"platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: gcp: defaultMachinePlatform: osDisk: diskType:",
"platform: gcp: defaultMachinePlatform: osImage: project:",
"platform: gcp: defaultMachinePlatform: osImage: name:",
"platform: gcp: defaultMachinePlatform: tags:",
"platform: gcp: defaultMachinePlatform: type:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:",
"platform: gcp: defaultMachinePlatform: secureBoot:",
"platform: gcp: defaultMachinePlatform: confidentialCompute:",
"platform: gcp: defaultMachinePlatform: onHostMaintenance:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"controlPlane: platform: gcp: osDisk: diskSizeGB:",
"controlPlane: platform: gcp: osDisk: diskType:",
"controlPlane: platform: gcp: tags:",
"controlPlane: platform: gcp: type:",
"controlPlane: platform: gcp: zones:",
"controlPlane: platform: gcp: secureBoot:",
"controlPlane: platform: gcp: confidentialCompute:",
"controlPlane: platform: gcp: onHostMaintenance:",
"controlPlane: platform: gcp: serviceAccount:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"compute: platform: gcp: osDisk: diskSizeGB:",
"compute: platform: gcp: osDisk: diskType:",
"compute: platform: gcp: tags:",
"compute: platform: gcp: type:",
"compute: platform: gcp: zones:",
"compute: platform: gcp: secureBoot:",
"compute: platform: gcp: confidentialCompute:",
"compute: platform: gcp: onHostMaintenance:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/installation-config-parameters-openstack |
Chapter 10. Management of Ceph object gateway using the Ceph Orchestrator | Chapter 10. Management of Ceph object gateway using the Ceph Orchestrator As a storage administrator, you can deploy Ceph object gateway using the command line interface or by using the service specification. You can also configure multi-site object gateways, and remove the Ceph object gateway using the Ceph Orchestrator. Cephadm deploys Ceph object gateway as a collection of daemons that manages a single-cluster deployment or a particular realm and zone in a multisite deployment. Note With Cephadm, the object gateway daemons are configured using the monitor configuration database instead of a ceph.conf or the command line. If that configuration is not already in the client.rgw section, then the object gateway daemons will start up with default settings and bind to the port 80 . Note The .default.rgw.buckets.index pool is created only after the bucket is created in Ceph Object Gateway, while the .default.rgw.buckets.data pool is created after the data is uploaded to the bucket. This section covers the following administrative tasks: Deploying the Ceph object gateway using the command line interface . Deploying the Ceph object gateway using the service specification . Deploying a multi-site Ceph object gateway using the Ceph Orchestrator . Removing the Ceph object gateway using the Ceph Orchestrator . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All the managers, monitors, and OSDs are deployed in the storage cluster. 10.1. Deploying the Ceph Object Gateway using the command line interface Using the Ceph Orchestrator, you can deploy the Ceph Object Gateway with the ceph orch command in the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example You can deploy the Ceph object gateway daemons in three different ways: Method 1 Create realm, zone group, zone, and then use the placement specification with the host name: Create a realm: Syntax Example Create a zone group: Syntax Example Create a zone: Syntax Example Commit the changes: Syntax Example Run the ceph orch apply command: Syntax Example Method 2 Use an arbitrary service name to deploy two Ceph Object Gateway daemons for a single cluster deployment: Syntax Example Method 3 Use an arbitrary service name on a labeled set of hosts: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph object gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 10.2. Deploying the Ceph Object Gateway using the service specification You can deploy the Ceph Object Gateway using the service specification with either the default or the custom realms, zones, and zone groups. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the bootstrapped host. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure As a root user, create a specification file: Example Configure S3 requests to wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete by setting rgw_graceful_stop to 'true' during Ceph Object gateway shutdown/restart. 
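As an illustration of the graceful-stop behavior just described, the two settings are typically applied together and then followed by an orchestrated restart so that running gateway daemons pick them up. This is a hedged sketch: the service name rgw.test is a placeholder for your own Ceph Object Gateway service, and 120 seconds simply mirrors the timeout used elsewhere in this section.

ceph config set client.rgw rgw_graceful_stop true
ceph config set client.rgw rgw_exit_timeout_secs 120
ceph orch restart rgw.test    # placeholder service name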
Syntax Note In containerized deployments, an additional extra_container_args configuration of --stop-timeout=120 (or the value of the rgw_exit_timeout_secs configuration, if not default) is also necessary for it to work as expected with ceph orch stop/restart commands. Edit the radosgw.yml file to include the following details for the default realm, zone, and zone group: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph Object Gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Optional: For custom realm, zone, and zone group, create the resources and then create the radosgw.yml file: Create the custom realm, zone, and zone group: Example Create the radosgw.yml file with the following details: Example Mount the radosgw.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon. Deploy the Ceph Object Gateway using the service specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 10.3. Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator Ceph Orchestrator supports multi-site configuration options for the Ceph Object Gateway. You can configure each object gateway to work in an active-active zone configuration, allowing writes to a non-primary zone. The multi-site configuration is stored within a container called a realm. The realm stores zone groups, zones, and a time period. The rgw daemons handle the synchronization, eliminating the need for a separate synchronization agent, thereby operating with an active-active configuration. You can also deploy multi-site zones using the command line interface (CLI). Note The following configuration assumes at least two Red Hat Ceph Storage clusters are in geographically separate locations. However, the configuration also works on the same site. Prerequisites At least two running Red Hat Ceph Storage clusters. At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Root-level access to all the nodes. Nodes or containers are added to the storage cluster. All Ceph Manager, Monitor, and OSD daemons are deployed. Procedure In the cephadm shell, configure the primary zone: Create a realm: Syntax Example If the storage cluster has a single realm, then specify the --default flag. Create a primary zone group: Syntax Example Create a primary zone: Syntax Example Optional: Delete the default zone, zone group, and the associated pools. Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Also, removing the default zone group deletes the system user. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Create a system user: Syntax Example Make a note of the access_key and secret_key . Add the access key and system key to the primary zone: Syntax Example Commit the changes: Syntax Example Outside the cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example In the Cephadm shell, configure the secondary zone. Pull the primary realm configuration from the host: Syntax Example Pull the primary period configuration from the host: Syntax Example Configure a secondary zone: Syntax Example Optional: Delete the default zone.
Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Update the Ceph configuration database: Syntax Example Commit the changes: Syntax Example Outside the Cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example Optional: Deploy multi-site Ceph Object Gateways using the placement specification: Syntax Example Verification Check the synchronization status to verify the deployment: Example 10.4. Removing the Ceph Object Gateway using the Ceph Orchestrator You can remove the Ceph object gateway daemons using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one Ceph object gateway daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example List the service: Example Remove the service: Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph object gateway using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph object gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. | [
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"2 label:rgw\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"touch radosgw.yml",
"ceph config set client.rgw rgw_graceful_stop true ceph config set client.rgw rgw_exit_timeout_secs 120",
"[root@host1 ~]USD cat rgw_spec.yaml service_type: rgw service_id: foo placement: count_per_host: 1 hosts: - rgw_node spec: rgw_frontend_port: 8081 extra_container_args: - --stop-timeout=120",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --endpoints=http://rgw.example.com:80",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/management-of-ceph-object-gateway-services-using-the-ceph-orchestrator |
2.2. Client Access Control | 2.2. Client Access Control libvirt 's client access control framework allows system administrators to set up fine-grained permission rules across client users, managed objects, and API operations. This allows client connections to be locked down to a minimal set of privileges. In the default configuration, the libvirtd daemon has three levels of access control: All connections start off in an unauthenticated state, where the only API operations allowed are those required to complete authentication. After successful authentication, a connection either has full, unrestricted access to all libvirt API calls, or is locked down to only "read only" operations, according to what socket the client connection originated on. The access control framework allows authenticated connections to have fine-grained permission rules to be defined by the administrator. Every API call in libvirt has a set of permissions that will be validated against the object being used. Further permissions will also be checked if certain flags are set in the API call. In addition to checks on the object passed in to an API call, some methods will filter their results. 2.2.1. Access Control Drivers The access control framework is designed as a pluggable system to enable future integration with arbitrary access control technologies. By default, the none driver is used, which performs no access control checks at all. Currently, libvirt provides support for using polkit as a real access control driver. To learn how to use the polkit access driver, see the configuration documentation . The access driver is configured in the /etc/libvirt/libvirtd.conf configuration file, using the access_drivers parameter. This parameter accepts an array of access control driver names. If more than one access driver is requested, then all must succeed in order for access to be granted. To enable 'polkit' as the driver, use the augtool command: To set the driver back to the default (no access control), enter the following command: For the changes made to libvirtd.conf to take effect, restart the libvirtd service. 2.2.2. Objects and Permissions libvirt applies access control to all the main object types in its API. Each object type, in turn, has a set of permissions defined. To determine what permissions are checked for a specific API call, consult the API reference manual documentation for the API in question. For the complete list of objects and permissions, see libvirt.org . 2.2.3. Security Concerns when Adding Block Devices to a Guest The host physical machine should not use file system labels to identify file systems in the fstab file, the initrd file, or on the kernel command line. Doing so presents a security risk if guest virtual machines have write access to whole partitions or LVM volumes, because a guest virtual machine could potentially write a file-system label belonging to the host physical machine to its own block device storage. Upon reboot of the host physical machine, the host physical machine could then mistakenly use the guest virtual machine's disk as a system disk, which would compromise the host physical machine system. It is preferable to use the UUID of a device to identify it in the /etc/fstab file, the /dev/initrd file, or on the kernel command line. Guest virtual machines should not be given write access to entire disks or block devices (for example, /dev/sdb ).
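To make the UUID recommendation above concrete, an /etc/fstab entry that identifies a file system by UUID rather than by label looks like the following sketch; the UUID and mount point are placeholders, and the blkid command can be used to read the real UUID of a device.

# /etc/fstab: identify the file system by UUID, not by a file system label
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /  xfs  defaults  0 0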
Guest virtual machines with access to entire block devices may be able to modify volume labels, which can be used to compromise the host physical machine system. Use partitions (for example, /dev/sdb1 ) or LVM volumes to prevent this problem. See LVM Administration with CLI Commands or LVM Configuration Examples for information on LVM administration and configuration examples. If you are using raw access to partitions, for example /dev/sdb1 or raw disks such as /dev/sdb, you should configure LVM to only scan disks that are safe, using the global_filter setting. See the Logical Volume Manager Administration Guide for an example of an LVM configuration script using the global_filter command. | [
"augtool -s set '/files/etc/libvirt/libvirtd.conf/access_drivers[1]' polkit",
"augtool -s rm /files/etc/libvirt/libvirtd.conf/access_drivers",
"systemctl restart libvirtd.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-securing_the_host_physical_machine_and_improving_performance-client_access_control |
Development Guide Volume 5: Caching Guide | Development Guide Volume 5: Caching Guide Red Hat JBoss Data Virtualization 6.4 This guide is intended for developers Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/index |
Chapter 4. Ceph authentication configuration | Chapter 4. Ceph authentication configuration As a storage administrator, authenticating users and services is important to the security of the Red Hat Ceph Storage cluster. Red Hat Ceph Storage includes the Cephx protocol, as the default, for cryptographic authentication, and the tools to manage authentication in the storage cluster. 4.1. Prerequisites Installation of the Red Hat Ceph Storage software. 4.2. Cephx authentication The cephx protocol is enabled by default. Cryptographic authentication has some computational costs, though they are generally quite low. If the network environment connecting clients and hosts is considered safe and you cannot afford authentication computational costs, you can disable it. When deploying a Ceph storage cluster, the deployment tool will create the client.admin user and keyring. Important Red Hat recommends using authentication. Note If you disable authentication, you are at risk of a man-in-the-middle attack altering client and server messages, which could lead to significant security issues. Enabling and disabling Cephx Enabling Cephx requires that you have deployed keys for the Ceph Monitors and OSDs. When toggling Cephx authentication on or off, you do not have to repeat the deployment procedures. 4.3. Enabling Cephx When cephx is enabled, Ceph will look for the keyring in the default search path, which includes /etc/ceph/USDcluster.USDname.keyring . You can override this location by adding a keyring option in the [global] section of the Ceph configuration file, but this is not recommended. Execute the following procedures to enable cephx on a cluster with authentication disabled. If you or your deployment utility have already generated the keys, you may skip the steps related to generating keys. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Create a client.admin key, and save a copy of the key for your client host: Warning This will erase the contents of any existing /etc/ceph/client.admin.keyring file. Do not perform this step if a deployment tool has already done it for you. Create a keyring for the monitor cluster and generate a monitor secret key: Copy the monitor keyring into a ceph.mon.keyring file in every monitor mon data directory. For example, to copy it to mon.a in cluster ceph , use the following: Generate a secret key for every OSD, where ID is the OSD number: By default the cephx authentication protocol is enabled. Note If the cephx authentication protocol was disabled previously by setting the authentication options to none , then by removing the following lines under the [global] section in the Ceph configuration file ( /etc/ceph/ceph.conf ) will reenable the cephx authentication protocol: Start or restart the Ceph storage cluster. Important Enabling cephx requires downtime because the cluster needs to be completely restarted, or it needs to be shut down and then started while client I/O is disabled. These flags need to be set before restarting or shutting down the storage cluster: Once cephx is enabled and all PGs are active and clean, unset the flags: 4.4. Disabling Cephx The following procedure describes how to disable Cephx. If your cluster environment is relatively safe, you can offset the computation expense of running authentication. Important Red Hat recommends enabling authentication. However, it may be easier during setup or troubleshooting to temporarily disable authentication. 
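Before disabling authentication, it can be useful to keep a record of the enabled configuration so that it is easy to restore later. A minimal sketch of the [global] section of the Ceph configuration file with Cephx explicitly enabled is shown below; it mirrors the three options that the disabling procedure sets to none.

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx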
Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Disable cephx authentication by setting the following options in the [global] section of the Ceph configuration file: Start or restart the Ceph storage cluster. 4.5. Cephx user keyrings When you run Ceph with authentication enabled, the ceph administrative commands and Ceph clients require authentication keys to access the Ceph storage cluster. The most common way to provide these keys to the ceph administrative commands and clients is to include a Ceph keyring under the /etc/ceph/ directory. The file name is usually ceph.client.admin.keyring or $cluster.client.admin.keyring. If you include the keyring under the /etc/ceph/ directory, you do not need to specify a keyring entry in the Ceph configuration file. Important Red Hat recommends copying the Red Hat Ceph Storage cluster keyring file to the nodes where you will run administrative commands, because it contains the client.admin key. To do so, execute the following command: Replace USER with the user name on the host that holds the client.admin key, and HOSTNAME with the host name of that host. Note Ensure the ceph.keyring file has appropriate permissions set on the client machine. You can specify the key itself in the Ceph configuration file using the key setting, which is not recommended, or a path to a key file using the keyfile setting. 4.6. Cephx daemon keyrings Administrative users or deployment tools might generate daemon keyrings in the same way as they generate user keyrings. By default, Ceph stores daemon keyrings inside their data directory. Each daemon has a default keyring location and a set of capabilities that it needs in order to function. Note The monitor keyring contains a key but no capabilities, and is not part of the Ceph storage cluster auth database. The daemon data directory locations default to directories of the form: Example You can override these locations, but it is not recommended. 4.7. Cephx message signatures Ceph provides fine-grained control so you can enable or disable signatures for service messages between the client and Ceph. You can also enable or disable signatures for messages between Ceph daemons. Important Red Hat recommends that Ceph authenticate all ongoing messages between the entities using the session key set up during the initial authentication. Note Ceph kernel modules do not support signatures yet. 4.8. Additional Resources See all the Red Hat Ceph Storage Cephx configuration options in Appendix D for specific option descriptions and usage. | [
"ceph auth get-or-create client.admin mon 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring",
"ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'",
"cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring",
"ceph auth get-or-create osd. ID mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph- ID /keyring",
"auth_cluster_required = none auth_service_required = none auth_client_required = none",
"ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause",
"ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause",
"auth_cluster_required = none auth_service_required = none auth_client_required = none",
"scp USER @ HOSTNAME :/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring",
"/var/lib/ceph/USDtype/ CLUSTER - ID",
"/var/lib/ceph/osd/ceph-12"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-authentication-configuration |
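As a companion to the Cephx user keyring guidance above, the following minimal sketch copies the cluster keyring to an administrative node and tightens its permissions. USER and HOSTNAME are the same placeholders used in the procedure, and the root ownership and 600 mode are conservative assumptions rather than values mandated by the guide.

# Copy the admin keyring from a node that already has it, then restrict access to root.
scp USER@HOSTNAME:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
chown root:root /etc/ceph/ceph.client.admin.keyring
chmod 600 /etc/ceph/ceph.client.admin.keyring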
2.4. Uninstalling an IdM Server | 2.4. Uninstalling an IdM Server Note At domain level 0 , the procedure is different. See Section D.3.6, "Removing a Replica" . Prerequisites Before uninstalling a server that serves as a certificate authority (CA), key recovery authority (KRA), or DNS Security Extensions (DNSSEC) server, make sure these services are running on another server in the domain. Warning Removing the last replica that serves as a CA, KRA, or DNSSEC server can seriously disrupt the Identity Management functionality. Procedure To uninstall server.example.com : On another server, use the ipa server-del command to delete server.example.com from the topology: On server.example.com , use the ipa-server-install --uninstall command: Make sure all name server (NS) DNS records pointing to server.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. | [
"ipa server-del server.example.com",
"ipa-server-install --uninstall"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/Uninstalling_IPA_Servers |
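One follow-up to the uninstall procedure above: after deleting the replica, confirm that no name server (NS) records still point at it. A minimal sketch follows; it assumes an IdM-managed zone named example.com and a valid admin Kerberos ticket, so adapt the zone and host names to your environment.

# Look for leftover records in the IdM-managed zone that still reference the removed server.
kinit admin
ipa dnsrecord-find example.com | grep -i "server.example.com" || echo "no records in example.com reference server.example.com"
# Cross-check with plain DNS, which also covers externally managed zones.
dig +short NS example.com | grep -i "server.example.com" || echo "DNS no longer lists server.example.com as a name server"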
Chapter 4. Using external storage | Chapter 4. Using external storage Organizations can have databases containing information, passwords, and other credentials. Typically, you cannot migrate existing data storage to a Red Hat build of Keycloak deployment so Red Hat build of Keycloak can federate existing external user databases. Red Hat build of Keycloak supports LDAP and Active Directory, but you can also code extensions for any custom user database by using the Red Hat build of Keycloak User Storage SPI. When a user attempts to log in, Red Hat build of Keycloak examines that user's storage to find that user. If Red Hat build of Keycloak does not find the user, Red Hat build of Keycloak iterates over each User Storage provider for the realm until it finds a match. Data from the external data storage then maps into a standard user model the Red Hat build of Keycloak runtime consumes. This user model then maps to OIDC token claims and SAML assertion attributes. External user databases rarely have the data necessary to support all the features of Red Hat build of Keycloak, so the User Storage Provider can opt to store items locally in Red Hat build of Keycloak user data storage. Providers can import users locally and sync periodically with external data storage. This approach depends on the capabilities of the provider and the configuration of the provider. For example, your external user data storage may not support OTP. The OTP can be handled and stored by Red Hat build of Keycloak, depending on the provider. 4.1. Adding a provider To add a storage provider, perform the following procedure: Procedure Click User Federation in the menu. User federation Select the provider type card from the listed cards. Red Hat build of Keycloak brings you to that provider's configuration page. 4.2. Dealing with provider failures If a User Storage Provider fails, you may not be able to log in and view users in the Admin Console. Red Hat build of Keycloak does not detect failures when using a Storage Provider to look up a user, so it cancels the invocation. If you have a Storage Provider with a high priority that fails during user lookup, the login or user query fails with an exception and will not fail over to the configured provider. Red Hat build of Keycloak searches the local Red Hat build of Keycloak user database first to resolve users before any LDAP or custom User Storage Provider. Consider creating an administrator account stored in the local Red Hat build of Keycloak user database in case of problems connecting to your LDAP and back ends. Each LDAP and custom User Storage Provider has an enable toggle on its Admin Console page. Disabling the User Storage Provider skips the provider when performing queries, so you can view and log in with user accounts in a different provider with lower priority. If your provider uses an import strategy and is disabled, imported users are still available for lookup in read-only mode. When a Storage Provider lookup fails, Red Hat build of Keycloak does not fail over because user databases often have duplicate usernames or duplicate emails between them. Duplicate usernames and emails can cause problems because the user loads from one external data store when the admin expects them to load from another data store. 4.3. Lightweight Directory Access Protocol (LDAP) and Active Directory Red Hat build of Keycloak includes an LDAP/AD provider. 
You can federate multiple different LDAP servers in one Red Hat build of Keycloak realm and map LDAP user attributes into the Red Hat build of Keycloak common user model. By default, Red Hat build of Keycloak maps the username, email, first name, and last name of the user account, but you can also configure additional mappings . Red Hat build of Keycloak's LDAP/AD provider supports password validation using LDAP/AD protocols and storage, edit, and synchronization modes. 4.3.1. Configuring federated LDAP storage Procedure Click User Federation in the menu. User federation Click Add LDAP providers . Red Hat build of Keycloak brings you to the LDAP configuration page. 4.3.2. Storage mode Red Hat build of Keycloak imports users from LDAP into the local Red Hat build of Keycloak user database. This copy of the user database synchronizes on-demand or through a periodic background task. An exception exists for synchronizing passwords. Red Hat build of Keycloak never imports passwords. Password validation always occurs on the LDAP server. The advantage of synchronization is that all Red Hat build of Keycloak features work efficiently because any required extra per-user data is stored locally. The disadvantage is that each time Red Hat build of Keycloak queries a specific user for the first time, Red Hat build of Keycloak performs a corresponding database insert. You can synchronize the import with your LDAP server. Import synchronization is unnecessary when LDAP mappers always read particular attributes from the LDAP rather than the database. You can use LDAP with Red Hat build of Keycloak without importing users into the Red Hat build of Keycloak user database. The LDAP server backs up the common user model that the Red Hat build of Keycloak runtime uses. If LDAP does not support data that a Red Hat build of Keycloak feature requires, that feature will not work. The advantage of this approach is that you do not have the resource usage of importing and synchronizing copies of LDAP users into the Red Hat build of Keycloak user database. The Import Users switch on the LDAP configuration page controls this storage mode. To import users, toggle this switch to ON . Note If you disable Import Users , you cannot save user profile attributes into the Red Hat build of Keycloak database. Also, you cannot save metadata except for user profile metadata mapped to the LDAP. This metadata can include role mappings, group mappings, and other metadata based on the LDAP mappers' configuration. When you attempt to change the non-LDAP mapped user data, the user update is not possible. For example, you cannot disable the LDAP mapped user unless the user's enabled flag maps to an LDAP attribute. 4.3.3. Edit mode Users and admins can modify user metadata, users through the Account Console , and administrators through the Admin Console. The Edit Mode configuration on the LDAP configuration page defines the user's LDAP update privileges. READONLY You cannot change the username, email, first name, last name, and other mapped attributes. Red Hat build of Keycloak shows an error anytime a user attempts to update these fields. Password updates are not supported. WRITABLE You can change the username, email, first name, last name, and other mapped attributes and passwords and synchronize them automatically with the LDAP store. 
UNSYNCED Red Hat build of Keycloak stores changes to the username, email, first name, last name, and passwords in Red Hat build of Keycloak local storage, so the administrator must synchronize this data back to LDAP. In this mode, Red Hat build of Keycloak deployments can update user metadata on read-only LDAP servers. This option also applies when importing users from LDAP into the local Red Hat build of Keycloak user database. Note When Red Hat build of Keycloak creates the LDAP provider, Red Hat build of Keycloak also creates a set of initial LDAP mappers . Red Hat build of Keycloak configures these mappers based on a combination of the Vendor , Edit Mode , and Import Users switches. For example, when edit mode is UNSYNCED, Red Hat build of Keycloak configures the mappers to read a particular user attribute from the database and not from the LDAP server. However, if you later change the edit mode, the mapper's configuration does not change because it is impossible to detect if the configuration changes changed in UNSYNCED mode. Decide the Edit Mode when creating the LDAP provider. This note applies to Import Users switch also. 4.3.4. Other configuration options Console Display Name The name of the provider to display in the admin console. Priority The priority of the provider when looking up users or adding a user. Sync Registrations Toggle this switch to ON if you want new users created by Red Hat build of Keycloak added to LDAP. Allow Kerberos authentication Enable Kerberos/SPNEGO authentication in the realm with user data provisioned from LDAP. For more information, see the Kerberos section . Other options Hover the mouse pointer over the tooltips in the Admin Console to see more details about these options. 4.3.5. Connecting to LDAP over SSL When you configure a secure connection URL to your LDAP store (for example, ldaps://myhost.com:636 ), Red Hat build of Keycloak uses SSL to communicate with the LDAP server. Configure a truststore on the Red Hat build of Keycloak server side so that Red Hat build of Keycloak can trust the SSL connection to LDAP - see Configuring a Truststore chapter. The Use Truststore SPI configuration property is deprecated. It should normally be left as Always . 4.3.6. Synchronizing LDAP users to Red Hat build of Keycloak If you set the Import Users option, the LDAP Provider handles importing LDAP users into the Red Hat build of Keycloak local database. The first time a user logs in or is returned as part of a user query (e.g. using the search field in the admin console), the LDAP provider imports the LDAP user into the Red Hat build of Keycloak database. During authentication, the LDAP password is validated. If you want to sync all LDAP users into the Red Hat build of Keycloak database, configure and enable the Sync Settings on the LDAP provider configuration page. Two types of synchronization exist: Periodic Full sync This type synchronizes all LDAP users into the Red Hat build of Keycloak database. The LDAP users already in Red Hat build of Keycloak, but different in LDAP, directly update in the Red Hat build of Keycloak database. Periodic Changed users sync When synchronizing, Red Hat build of Keycloak creates or updates users created or updated after the last sync only. The best way to synchronize is to click Synchronize all users when you first create the LDAP provider, then set up periodic synchronization of changed users. 4.3.7. LDAP mappers LDAP mappers are listeners triggered by the LDAP Provider. 
They provide another extension point to LDAP integration. LDAP mappers are triggered when: Users log in by using LDAP. Users initially register. The Admin Console queries a user. When you create an LDAP Federation provider, Red Hat build of Keycloak automatically provides a set of mappers for this provider. This set is changeable by users, who can also develop mappers or update/delete existing ones. User Attribute Mapper This mapper specifies which LDAP attribute maps to the attribute of the Red Hat build of Keycloak user. For example, you can configure the mail LDAP attribute to the email attribute in the Red Hat build of Keycloak database. For this mapper implementation, a one-to-one mapping always exists. FullName Mapper This mapper specifies the full name of the user. Red Hat build of Keycloak saves the name in an LDAP attribute (usually cn ) and maps the name to the firstName and lastname attributes in the Red Hat build of Keycloak database. Having cn to contain the full name of the user is common for LDAP deployments. Note When you register new users in Red Hat build of Keycloak and Sync Registrations is ON for the LDAP provider, the fullName mapper permits falling back to the username. This fallback is useful when using Microsoft Active Directory (MSAD). The common setup for MSAD is to configure the cn LDAP attribute as fullName and, at the same time, use the cn LDAP attribute as the RDN LDAP Attribute in the LDAP provider configuration. With this setup, Red Hat build of Keycloak falls back to the username. For example, if you create Red Hat build of Keycloak user "john123" and leave firstName and lastName empty, then the fullname mapper saves "john123" as the value of the cn in LDAP. When you enter "John Doe" for firstName and lastName later, the fullname mapper updates LDAP cn to the "John Doe" value as falling back to the username is unnecessary. Hardcoded Attribute Mapper This mapper adds a hardcoded attribute value to each Red Hat build of Keycloak user linked with LDAP. This mapper can also force values for the enabled or emailVerified user properties. Role Mapper This mapper configures role mappings from LDAP into Red Hat build of Keycloak role mappings. A single role mapper can map LDAP roles (usually groups from a particular branch of the LDAP tree) into roles corresponding to a specified client's realm roles or client roles. You can configure more Role mappers for the same LDAP provider. For example, you can specify that role mappings from groups under ou=main,dc=example,dc=org map to realm role mappings, and role mappings from groups under ou=finance,dc=example,dc=org map to client role mappings of client finance . Hardcoded Role Mapper This mapper grants a specified Red Hat build of Keycloak role to each Red Hat build of Keycloak user from the LDAP provider. Group Mapper This mapper maps LDAP groups from a branch of an LDAP tree into groups within Red Hat build of Keycloak. This mapper also propagates user-group mappings from LDAP into user-group mappings in Red Hat build of Keycloak. MSAD User Account Mapper This mapper is specific to Microsoft Active Directory (MSAD). It can integrate the MSAD user account state into the Red Hat build of Keycloak account state, such as enabled account or expired password. This mapper uses the userAccountControl , and pwdLastSet LDAP attributes, specific to MSAD and are not the LDAP standard. For example, if the value of pwdLastSet is 0 , the Red Hat build of Keycloak user must update their password. 
The result is an UPDATE_PASSWORD required action added to the user. If the value of userAccountControl is 514 (disabled account), the Red Hat build of Keycloak user is disabled. Certificate Mapper This mapper maps X.509 certificates. Red Hat build of Keycloak uses it in conjunction with X.509 authentication and Full certificate in PEM format as an identity source. This mapper behaves similarly to the User Attribute Mapper , but Red Hat build of Keycloak can filter for an LDAP attribute storing a PEM or DER format certificate. Enable Always Read Value From LDAP with this mapper. User Attribute mappers that map basic Red Hat build of Keycloak user attributes, such as username, firstname, lastname, and email, to corresponding LDAP attributes. You can extend these and provide your own additional attribute mappings. The Admin Console provides tooltips to help with configuring the corresponding mappers. 4.3.8. Password hashing When Red Hat build of Keycloak updates a password, Red Hat build of Keycloak sends the password in plain-text format. This action is different from updating the password in the built-in Red Hat build of Keycloak database, where Red Hat build of Keycloak hashes and salts the password before sending it to the database. For LDAP, Red Hat build of Keycloak relies on the LDAP server to hash and salt the password. By default, LDAP servers such as MSAD, RHDS, or FreeIPA hash and salt passwords. Other LDAP servers such as OpenLDAP or ApacheDS store the passwords in plain-text unless you use the LDAPv3 Password Modify Extended Operation as described in RFC3062 . Enable the LDAPv3 Password Modify Extended Operation in the LDAP configuration page. See the documentation of your LDAP server for more details. Warning Always verify that user passwords are properly hashed and not stored as plaintext by inspecting a changed directory entry using ldapsearch and base64 decode the userPassword attribute value. 4.3.9. Configuring the connection pool For more efficiency when managing LDAP connections and to improve performance when handling multiple connections, you can enable connection pooling. By doing that, when a connection is closed, it will be returned to the pool for future use therefore reducing the cost of creating new connections all the time. The LDAP connection pool configuration is configured using the following system properties: Name Description com.sun.jndi.ldap.connect.pool.authentication A list of space-separated authentication types of connections that may be pooled. Valid types are "none", "simple", and "DIGEST-MD5" com.sun.jndi.ldap.connect.pool.initsize The string representation of an integer that represents the number of connections per connection identity to create when initially creating a connection for the identity com.sun.jndi.ldap.connect.pool.maxsize The string representation of an integer that represents the maximum number of connections per connection identity that can be maintained concurrently com.sun.jndi.ldap.connect.pool.prefsize The string representation of an integer that represents the preferred number of connections per connection identity that should be maintained concurrently com.sun.jndi.ldap.connect.pool.timeout The string representation of an integer that represents the number of milliseconds that an idle connection may remain in the pool without being closed and removed from the pool com.sun.jndi.ldap.connect.pool.protocol A list of space-separated protocol types of connections that may be pooled. 
Valid types are "plain" and "ssl" com.sun.jndi.ldap.connect.pool.debug A string that indicates the level of debug output to produce. Valid values are "fine" (trace connection creation and removal) and "all" (all debugging information) For more details, see the Java LDAP Connection Pooling Configuration documentation. To set any of these properties, you can set the JAVA_OPTS_APPEND environment variable: export JAVA_OPTS_APPEND=-Dcom.sun.jndi.ldap.connect.pool.initsize=10 -Dcom.sun.jndi.ldap.connect.pool.maxsize=50 4.3.10. Troubleshooting It is useful to increase the logging level to TRACE for the category org.keycloak.storage.ldap . With this setting, many logging messages are sent to the server log in the TRACE level, including the logging for all queries to the LDAP server and the parameters, which were used to send the queries. When you are creating any LDAP question on user forum or JIRA, consider attaching the server log with enabled TRACE logging. If it is too big, the good alternative is to include just the snippet from server log with the messages, which were added to the log during the operation, which causes the issues to you. When you create an LDAP provider, a message appears in the server log in the INFO level starting with: It shows the configuration of your LDAP provider. Before you are asking the questions or reporting bugs, it will be nice to include this message to show your LDAP configuration. Eventually feel free to replace some config changes, which you do not want to include, with some placeholder values. One example is bindDn=some-placeholder . For connectionUrl , feel free to replace it as well, but it is generally useful to include at least the protocol, which was used ( ldap vs ldaps )`. Similarly it can be useful to include the details for configuration of your LDAP mappers, which are displayed with the message like this at the DEBUG level: Note those messages are displayed just with the enabled DEBUG logging. For tracking the performance or connection pooling issues, consider setting the value of property com.sun.jndi.ldap.connect.pool.debug to all . This change adds many additional messages to the server log with the included logging for the LDAP connection pooling. As a result, you can track the issues related to connection pooling or performance. For more details, see Configuring the connection pool. Note After changing the configuration of connection pooling, you may need to restart the Red Hat build of Keycloak server to enforce re-initialization of the LDAP provider connection. If no more messages appear for connection pooling even after server restart, it can indicate that connection pooling does not work with your LDAP server. For the case of reporting LDAP issue, you may consider to attach some part of your LDAP tree with the target data, which causes issues in your environment. For example if login of some user takes lot of time, you can consider attach his LDAP entry showing count of member attributes of various "group" entries. In this case, it might be useful to add if those group entries are mapped to some Group LDAP mapper (or Role LDAP Mapper) in Red Hat build of Keycloak and so on. 4.4. SSSD and FreeIPA Identity Management integration Red Hat build of Keycloak includes the System Security Services Daemon (SSSD) plugin. SSSD is part of the Fedora and Red Hat Enterprise Linux (RHEL), and it provides access to multiple identities and authentication providers. SSSD also provides benefits such as failover and offline support. 
For more information, see the Red Hat Enterprise Linux Identity Management documentation . SSSD integrates with the FreeIPA identity management (IdM) server, providing authentication and access control. With this integration, Red Hat build of Keycloak can authenticate against privileged access management (PAM) services and retrieve user data from SSSD. For more information about using Red Hat Identity Management in Linux environments, see the Red Hat Enterprise Linux Identity Management documentation . Red Hat build of Keycloak and SSSD communicate through read-only D-Bus interfaces. For this reason, the way to provision and update users is to use the FreeIPA/IdM administration interface. By default, the interface imports the username, email, first name, and last name. Note Red Hat build of Keycloak registers groups and roles automatically but does not synchronize them. The groups are imported from SSSD the first time the user is accessed and then they are managed entirely inside Red Hat build of Keycloak. Any changes made by the administrator in Red Hat build of Keycloak do not synchronize with SSSD or vice-versa. 4.4.1. FreeIPA/IdM server The FreeIPA Container image is available at Quay.io . To set up the FreeIPA server, see the FreeIPA documentation . Procedure Run your FreeIPA server using this command: docker run --name freeipa-server-container -it \ -h server.freeipa.local -e PASSWORD=YOUR_PASSWORD \ -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ -v /var/lib/ipa-data:/data:Z freeipa/freeipa-server The parameter -h with server.freeipa.local represents the FreeIPA/IdM server hostname. Change YOUR_PASSWORD to a password of your own. After the container starts, change the /etc/hosts file to include: x.x.x.x server.freeipa.local If you do not make this change, you must set up a DNS server. Use the following command to enroll your Linux server in the IPA domain so that the SSSD federation provider starts and runs on Red Hat build of Keycloak: ipa-client-install --mkhomedir -p admin -w password Run the following command on the client to verify the installation is working: kinit admin Enter your password. Add users to the IPA server using this command: USD ipa user-add <username> --first=<first name> --last=<surname> --email=<email address> --phone=<telephoneNumber> --street=<street> --city=<city> --state=<state> --postalcode=<postal code> --password Force set the user's password using kinit. kinit <username> Enter the following to restore normal IPA operation: kdestroy -A kinit admin 4.4.2. SSSD and D-Bus The federation provider obtains the data from SSSD using D-BUS. It authenticates the data using PAM. Procedure Install the sssd-dbus RPM. USD sudo yum install sssd-dbus Run the following provisioning script: USD bin/federation-sssd-setup.sh The script can also be used as a guide to configure SSSD and PAM for Red Hat build of Keycloak. It makes the following changes to /etc/sssd/sssd.conf : [domain/your-hostname.local] ... ldap_user_extra_attrs = mail:mail, sn:sn, givenname:givenname, telephoneNumber:telephoneNumber ... [sssd] services = nss, sudo, pam, ssh, ifp ... [ifp] allowed_uids = root, yourOSUsername user_attributes = +mail, +telephoneNumber, +givenname, +sn The ifp service is added to SSSD and configured to allow the OS user to interrogate the IPA server through this interface. The script also creates a new PAM service /etc/pam.d/keycloak to authenticate users via SSSD: auth required pam_sss.so account required pam_sss.so Run dbus-send to ensure the setup is successful. 
dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:<username> array:string:mail,givenname,sn,telephoneNumber dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:<username> If the setup is successful, each command displays the user's attributes and groups respectively. If there is a timeout or an error, the federation provider running on Red Hat build of Keycloak cannot retrieve any data. This error usually happens because the server is not enrolled in the FreeIPA IdM server, or does not have permission to access the SSSD service. If you do not have permission to access the SSSD service, ensure that the user running the Red Hat build of Keycloak server is in the /etc/sssd/sssd.conf file in the following section: [ifp] allowed_uids = root, yourOSUsername And the ipaapi system user is created inside the host. This user is necessary for the ifp service. Check the user is created in the system. grep ipaapi /etc/passwd ipaapi:x:992:988:IPA Framework User:/:/sbin/nologin 4.4.3. Enabling the SSSD federation provider Red Hat build of Keycloak uses DBus-Java project to communicate at a low level with D-Bus and JNA to authenticate via Operating System Pluggable Authentication Modules (PAM). Although now Red Hat build of Keycloak contains all the needed libraries to run the SSSD provider, JDK version 21 is needed. Therefore the SSSD provider will only be displayed when the host configuration is correct and JDK 21 is used to run Red Hat build of Keycloak. 4.4.4. Configuring a federated SSSD store After the installation, configure a federated SSSD store. Procedure Click User Federation in the menu. If everything is setup successfully the Add Sssd providers button will be displayed in the page. Click on it. Assign a name to the new provider. Click Save . You can now authenticate against Red Hat build of Keycloak using a FreeIPA/IdM user and credentials. 4.5. Custom providers Red Hat build of Keycloak does have a Service Provider Interface (SPI) for User Storage Federation to develop custom providers. You can find documentation on developing customer providers in the Server Developer Guide . | [
"export JAVA_OPTS_APPEND=-Dcom.sun.jndi.ldap.connect.pool.initsize=10 -Dcom.sun.jndi.ldap.connect.pool.maxsize=50",
"Creating new LDAP Store for the LDAP storage provider:",
"Mapper for provider: XXX, Mapper name: YYY, Provider: ZZZ",
"docker run --name freeipa-server-container -it -h server.freeipa.local -e PASSWORD=YOUR_PASSWORD -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/lib/ipa-data:/data:Z freeipa/freeipa-server",
"x.x.x.x server.freeipa.local",
"ipa-client-install --mkhomedir -p admin -w password",
"kinit admin",
"ipa user-add <username> --first=<first name> --last=<surname> --email=<email address> --phone=<telephoneNumber> --street=<street> --city=<city> --state=<state> --postalcode=<postal code> --password",
"kinit <username>",
"kdestroy -A kinit admin",
"sudo yum install sssd-dbus",
"bin/federation-sssd-setup.sh",
"[domain/your-hostname.local] ldap_user_extra_attrs = mail:mail, sn:sn, givenname:givenname, telephoneNumber:telephoneNumber [sssd] services = nss, sudo, pam, ssh, ifp [ifp] allowed_uids = root, yourOSUsername user_attributes = +mail, +telephoneNumber, +givenname, +sn",
"auth required pam_sss.so account required pam_sss.so",
"dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:<username> array:string:mail,givenname,sn,telephoneNumber dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:<username>",
"[ifp] allowed_uids = root, yourOSUsername",
"grep ipaapi /etc/passwd ipaapi:x:992:988:IPA Framework User:/:/sbin/nologin"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/user-storage-federation |
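The password-hashing warning in the LDAP section above asks you to verify, using ldapsearch and a base64 decode of the userPassword attribute, that passwords changed through Red Hat build of Keycloak are stored hashed rather than in plain text. A minimal sketch of that check follows; the connection URL, bind DN, base DN, and test user are placeholders for your own directory, not values taken from this guide.

# Fetch the userPassword attribute of a user whose password was just changed through Keycloak.
ldapsearch -LLL -H ldaps://ldap.example.com:636 \
  -D "cn=admin,dc=example,dc=com" -W \
  -b "ou=People,dc=example,dc=com" "(uid=testuser)" userPassword
# ldapsearch base64-encodes non-printable values and shows them as 'userPassword::'.
# Decode the value and confirm it starts with a hash scheme marker such as {SSHA} or {PBKDF2},
# not the clear-text password itself.
echo "<base64-value-from-the-output>" | base64 -d; echo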
Chapter 10. Java Lombok | Chapter 10. Java Lombok This section shows how to enable Lombok support in your Java projects. By default, the lombok.jar file is available in all Java plug-ins provided by CodeReady Workspaces. To enable Lombok in a CodeReady Workspaces workspace, see the instructions below. Prerequisites A workspace or a devfile with: One of the Java-based plug-ins enabled ( redhat/java , redhat/java11 , redhat/java8 , redhat/quarkus-java8 or redhat/quarkus-java11 ) A valid Lombok project to import Procedure Open the workspace devfile. Edit the existing Java plug-in section, adding the preference: - id: redhat/java/latest preferences: java.jdt.ls.vmargs: '-javaagent:/lombok.jar' Verification Start or restart the workspace. Open a file containing Lombok annotations. Verify that the Class outline contains the Lombok generated methods. Additional resources For more details, see the Project Lombok website. | [
"- id: redhat/java/latest preferences: java.jdt.ls.vmargs: '-javaagent:/lombok.jar'"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/proc_java-lombok_crw |
Chapter 3. Specifying dedicated nodes | Chapter 3. Specifying dedicated nodes A Kubernetes cluster runs on top of many Virtual Machines or nodes (generally anywhere between 2 and 20 nodes). Pods can be scheduled on any of these nodes. When you create or schedule a new pod, use the topology_spread_constraints setting to configure how new pods are distributed across the underlying nodes when scheduled or created. Do not schedule your pods on a single node, because if that node fails, the services that those pods provide also fails. Schedule the control plane nodes to run on different nodes to the automation job pods. If the control plane pods share nodes with the job pods, the control plane can become resource starved and degrade the performance of the whole application. 3.1. Assigning pods to specific nodes You can constrain the automation controller pods created by the operator to run on a certain subset of nodes. node_selector and postgres_selector constrain the automation controller pods to run only on the nodes that match all the specified key, or value, pairs. tolerations and postgres_tolerations enable the automation controller pods to be scheduled onto nodes with matching taints. See Taints and Toleration in the Kubernetes documentation for further details. The following table shows the settings and fields that can be set on the automation controller's specification section of the YAML (or using the OpenShift UI form). Name Description Default postgres_image Path of the image to pull postgres postgres_image_version Image version to pull 13 node_selector AutomationController pods' nodeSelector ""'' topology_spread_constraints AutomationController pods' topologySpreadConstraints ""'' tolerations AutomationController pods' tolerations ""'' annotations AutomationController pods' annotations ""'' postgres_selector Postgres pods' nodeSelector ""'' postgres_tolerations Postgres pods' tolerations ""'' topology_spread_constraints can help optimize spreading your control plane pods across the compute nodes that match your node selector. For example, with the maxSkew parameter of this option set to 100 , this means maximally spread across available nodes. So if there are three matching compute nodes and three pods, one pod will be assigned to each compute node. This parameter helps prevent the control plane pods from competing for resources with each other. Example of a custom configuration for constraining controller pods to specific nodes 3.2. Specify nodes for job execution You can add a node selector to the container group pod specification to ensure they only run against certain nodes. First add a label to the nodes you want to run jobs against. The following procedure adds a label to a node. Procedure List the nodes in your cluster, along with their labels: kubectl get nodes --show-labels The output is similar to this (shown here in a table): Name Status Roles Age Version Labels worker0 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker0 worker1 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker1 worker2 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker2 Choose one of your nodes, and add a label to it by using the following command: kubectl label nodes <your-node-name> <aap_node_type>=<execution> For example: kubectl label nodes <your-node-name> disktype=ssd where <your-node-name> is the name of your chosen node. 
Verify that your chosen node has a disktype=ssd label: kubectl get nodes --show-labels The output is similar to this (shown here in a table): Name Status Roles Age Version Labels worker0 Ready <none> 1d v1.13.0 ... disktype=ssd,kubernetes.io/hostname=worker0 worker1 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker1 worker2 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker2 You can see that the worker0 node now has a disktype=ssd label. In the automation controller UI, specify that label in the metadata section of your customized pod specification in the container group. apiVersion: v1 kind: Pod metadata: disktype: ssd namespace: ansible-automation-platform spec: serviceAccountName: default automountServiceAccountToken: false nodeSelector: aap_node_type: execution containers: - image: >- registry.redhat.io/ansible-automation-platform-22/ee-supported-rhel8@sha256:d134e198b179d1b21d3f067d745dd1a8e28167235c312cdc233860410ea3ec3e name: worker args: - ansible-runner - worker - '--private-data-dir=/runner' resources: requests: cpu: 250m memory: 100Mi Extra settings With extra_settings , you can pass many custom settings by using the awx-operator. The parameter extra_settings is appended to /etc/tower/settings.py and can be an alternative to the extra_volumes parameter. Name Description Default extra_settings Extra settings '' Example configuration of extra_settings parameter 3.3. Custom pod timeouts A container group job in automation controller transitions to the running state just before you submit the pod to the Kubernetes API. Automation controller then expects the pod to enter the Running state before AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT seconds has elapsed. You can set AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT to a higher value if you want automation controller to wait for longer before canceling jobs that fail to enter the Running state. AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT is how long automation controller waits from creation of a pod until the Ansible work begins in the pod. You can also extend the time if the pod cannot be scheduled because of resource constraints. You can do this using extra_settings on the automation controller specification. The default value is two hours. This is used if you are consistently launching many more jobs than Kubernetes can schedule, and jobs are spending periods longer than AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT in pending . Jobs are not launched until control capacity is available. If many more jobs are being launched than the container group has capacity to run, consider scaling up your Kubernetes worker nodes. 3.4. Jobs scheduled on the worker nodes Both automation controller and Kubernetes play a role in scheduling a job. When a job is launched, its dependencies are fulfilled, meaning any project updates or inventory updates are launched by automation controller as required by the job template, project, and inventory settings. If the job is not blocked by other business logic in automation controller and there is control capacity in the control plane to start the job, the job is submitted to the dispatcher. The default settings of the "cost" to control a job is 1 capacity . So, a control pod with 100 capacity is able to control up to 100 jobs at a time. Given control capacity, the job transitions from pending to waiting . The dispatcher, which is a background process in the control plan pod, starts a worker process to run the job. 
This communicates with the Kubernetes API using a service account associated with the container group and uses the pod specification as defined on the Container Group in automation controller to provision the pod. The job status in automation controller is shown as running . Kubernetes now schedules the pod. A pod can remain in the pending state for AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT . If the pod is denied through a ResourceQuota , the job starts over at pending . You can configure a resource quota on a namespace to limit how many resources may be consumed by pods in the namespace. For further information about ResourceQuotas, see Resource Quotas . | [
"spec: node_selector: | disktype: ssd kubernetes.io/arch: amd64 kubernetes.io/os: linux topology_spread_constraints: | - maxSkew: 100 topologyKey: \"topology.kubernetes.io/zone\" whenUnsatisfiable: \"ScheduleAnyway\" labelSelector: matchLabels: app.kubernetes.io/name: \"<resourcename>\" tolerations: | - key: \"dedicated\" operator: \"Equal\" value: \"AutomationController\" effect: \"NoSchedule\" postgres_selector: | disktype: ssd kubernetes.io/arch: amd64 kubernetes.io/os: linux postgres_tolerations: | - key: \"dedicated\" operator: \"Equal\" value: \"AutomationController\" effect: \"NoSchedule\"",
"get nodes --show-labels",
"label nodes <your-node-name> <aap_node_type>=<execution>",
"label nodes <your-node-name> disktype=ssd",
"get nodes --show-labels",
"apiVersion: v1 kind: Pod metadata: disktype: ssd namespace: ansible-automation-platform spec: serviceAccountName: default automountServiceAccountToken: false nodeSelector: aap_node_type: execution containers: - image: >- registry.redhat.io/ansible-automation-platform-22/ee-supported-rhel8@sha256:d134e198b179d1b21d3f067d745dd1a8e28167235c312cdc233860410ea3ec3e name: worker args: - ansible-runner - worker - '--private-data-dir=/runner' resources: requests: cpu: 250m memory: 100Mi",
"spec: extra_settings: - setting: MAX_PAGE_SIZE value: \"500\" - setting: AUTH_LDAP_BIND_DN value: \"cn=admin,dc=example,dc=com\" - setting: SYSTEM_TASK_ABS_MEM value: \"500\""
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_performance_considerations_for_operator_based_installations/assembly-specify-dedicted-nodes |
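Tying the custom pod timeout discussion above to the extra_settings mechanism, the following minimal sketch raises AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT on the AutomationController resource. The resource name, namespace, and the 14400 value (four hours, assuming the setting is expressed in seconds) are illustrative assumptions, not defaults from this guide.

# Patch the AutomationController spec; "my-controller" and the namespace are placeholders.
# Note: a merge patch replaces the whole extra_settings list, so include any settings you already rely on.
kubectl -n ansible-automation-platform patch automationcontroller my-controller \
  --type=merge \
  -p '{"spec":{"extra_settings":[{"setting":"AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT","value":"14400"}]}}'
# Verify that the new setting is present on the resource.
kubectl -n ansible-automation-platform get automationcontroller my-controller -o jsonpath='{.spec.extra_settings}'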
Chapter 3. Adding skills to your taxonomy tree | Chapter 3. Adding skills to your taxonomy tree The student model can learn custom skills by populating the qna.yaml file with your domain-specific skill. Each qna.yaml file for skills contains a set of key-value entries with the following keys: version - The version of the qna.yaml file; this is the format of the file used for SDG. The currently supported value for this parameter is 2. created_by - Your Git username or the name of the contributor. task_description - A description of your skill and its function. seed_examples - A collection of key and value entries. context - Grounded skills require the user to provide additional context containing information that the model needs to know for executing the skill. The context key is not required for freeform skills. Each qna.yaml needs five context blocks and has a maximum token count of 500 tokens. question - Specify a question for the model. Each qna.yaml file needs at least five question and answer pairs with a maximum token count of 250 tokens. answer - Specify an answer for the model. Each qna.yaml file needs at least five question and answer pairs with a maximum token count of 250 tokens. 3.1. Creating a skill YAML file You can customize your taxonomy tree so the model can learn new skills for your desired use cases. The following procedure shows how to create a taxonomy tree that contains your skill qna.yaml file. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab and can use the ilab CLI. You have root user access on your machine. Procedure Navigate to the compositional_skills folder in the taxonomy directory. Important Creating foundational skills, that is, skills that involve writing code, is not currently supported on RHEL AI version 1.3. Based on the directories that exist in the tree, select where in the tree you want to add your skill qna.yaml file. Example file path in the taxonomy tree taxonomy/compositional_skills/grounded/<add_example>/qna.yaml Using your desired text editor, create the qna.yaml file. Note For SDG to run properly, you must include at least five question and answer pair examples in your qna.yaml file. Add the necessary keys to the qna.yaml file and save your changes. For more information on formatting your qna.yaml file, see "Sample skill YAML specifications". Verification To verify that your skill is in the proper format, you can run the following command: $ ilab taxonomy diff The CLI reports whether your taxonomy tree and qna.yaml file are valid and properly formatted, and shows where to fix any errors you may encounter. Example output of valid taxonomy tree and qna.yaml file compositional_skills/writing/freeform/<example>/qna.yaml Taxonomy in /taxonomy/ is valid :) Example output of invalid taxonomy tree and qna.yaml file with errors 6:11 error syntax error: mapping values are not allowed here (syntax) Reading taxonomy failed with the following error: 1 taxonomy with errors! Exiting. 3.2. Sample skills YAML specifications Skills share a question and answer layout similar to that of knowledge YAML files. On RHEL AI, the synthetic data generation (SDG) process uses your qna.yaml seed examples to create a large set of artificial data for the model to learn, rather than relying exclusively on user-generated data. The order of the question, answer, and context pairs does not influence the SDG or training process. There are multiple types of skills that are split into categories: freeform, grounded, and foundational skills.
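Before looking at the full category samples that follow, here is a minimal shell sketch, not part of the official procedure, that scaffolds a freeform skill and validates the tree. The directory name and contributor name are placeholders, and the single seed example shown (taken from the freeform sample below) must be extended to at least five question and answer pairs before running SDG.

# Create a place for the new skill under the taxonomy tree (directory name is illustrative).
mkdir -p taxonomy/compositional_skills/writing/freeform/my_rhyming_skill
# Write a skeleton qna.yaml; extend seed_examples to at least five pairs before generating data.
cat > taxonomy/compositional_skills/writing/freeform/my_rhyming_skill/qna.yaml <<'EOF'
version: 2
created_by: your-git-username
task_description: 'Teach the model how to rhyme.'
seed_examples:
  - question: What are 5 words that rhyme with horn?
    answer: warn, torn, born, thorn, and corn.
EOF
# Validate the taxonomy; the command reports formatting errors and where to fix them.
ilab taxonomy diff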
You can see samples of each category in the following examples: Example freeform compositional skill qna.yaml file version: 2 1 created_by: <user-name> 2 task_description: 'Teach the model how to rhyme.' 3 seed_examples: - question: What are 5 words that rhyme with horn? 4 answer: warn, torn, born, thorn, and corn. 5 - question: What are 5 words that rhyme with cat? answer: bat, gnat, rat, vat, and mat. - question: What are 5 words that rhyme with poor? answer: door, shore, core, bore, and tore. - question: What are 5 words that rhyme with bank? answer: tank, rank, prank, sank, and drank. - question: What are 5 words that rhyme with bake? answer: wake, lake, steak, make, and quake. Example grounded compositional skill qna.yaml file version: 2 1 created_by: <user-name> 2 task_description: | 3 Given a context document and a user's question, determine whether you have enough information to provide a yes-or-no answer to the user's question. seed_examples: - context: | 4 Document: All regular employees except those on a leave of absence (LOA) must file Form XYZ. If you are a regular employee on a LOA, you do not need to file the form. If you are a supplemental employee (type B1, type B2, or type B3), your employee type determines which form you must file, as follows: * If you are a type B1 employee, you must follow the same rules that apply to the regular employees. * If you are a type B2 employee, the rules that apply to you are different from those that apply to the regular employees. See Pub. 31416 to find out which forms you should file. * If you are a supplemental employee of neither type B1 nor type B2, you are a type B3 employee. Different rules apply for each part of the year. For information on type B3 employees, see Pub. 31416. question: | 5 User asks the following question: "I am a supplemental employee of type B1 on a LOA. Should I file Form XYZ?" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | 6 Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is "no". Indeed, the user is a supplemental employee of type B1 and is on a LOA. According to the document, supplemental employees of type B1 must follow the same rules that apply to the regular employees, and a regular employee on a LOA does not need to file Form XYZ. Hence, the user does not need to file Form XYZ. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: All regular employees except those on a leave of absence (LOA) must file Form XYZ. If you are a regular employee on a LOA, you do not need to file the form. If you are a supplemental employee (type B1, type B2, or type B3), your employee type determines which form you must file, as follows: * If you are a type B1 employee, you must follow the same rules that apply to the regular employees. * If you are a type B2 employee, the rules that apply to you are different from those that apply to the regular employees. See Pub. 31416 to find out which forms you should file. * If you are a supplemental employee of neither type B1 nor type B2, you are a type B3 employee. Different rules apply for each part of the year. For information on type B3 employees, see Pub. 31416. question: | User asks the following question: "I am a supplemental employee of type B1, not on a LOA. Should I file Form XYZ?" 
As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is "yes". Indeed, the user is a supplemental employee of type B1 and is not on a LOA. According to the document, supplemental employees of type B1 must follow the same rules that apply to the regular employees, and all regular employees except those on a LOA must file Form XYZ. Hence, the user must file Form XYZ. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: The Helping Hands Program enables the employees of XYZ Technologies to support nonprofit organizations within the communities where the employees live and work. All active, regular employees of XYZ Technologies are eligible to participate in the program. The following categories are ineligible to participate in the program: * Retirees who no longer work at XYZ Technologies * XYZ employees on a leave of absence * Supplemental employees and contractors who are not regular XYZ employees Please note that XYZ employees on a leave of absence can participate in the Helping Hands Program once they return to an active status. question: | User asks the following question: "I am an active, regular employee of XYZ Technologies. I am not on a leave of absence. Can I participate in the Helping Hands Program?" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is "yes". Indeed, the user is an active, regular employee of XYZ Technologies. According to the document, all active, regular employees of XYZ Technologies are eligible to participate in the Helping Hands Program. The document excludes retirees, XYZ employees on a leave of absence, and those who are not regular XYZ employees; however, the user is an active and regular employee who is not on a leave of absence, so the exclusions do not apply. Therefore, the user can participate in the Helping Hands Program. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: Applying for a driver's license is as easy as 1-2-3! You can apply for a driver's license if you bring documents from each of the three sections below: Section 1. Proof of Identity - one document is required from the list below: * Unexpired passport * Birth certificate * Certificate of naturalization or citizenship * Valid, unexpired Permanent Resident Card Section 2. Proof of Social Security Number - one document is required from the list below: * Social Security card * W-2 form * SSA-1099 form * Paystub Section 3. Proof of Residency - two different documents are required from the list below: * Home utility bill * Medical document * Proof of car registration * Proof of employment * Proof of insurance * Mortgage bill * Lease agreement * Tax return question: | User asks the following question: "Can I apply for a driver's license if I bring my birth certificate, my W-2 form, and my proof of car registration?" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? 
Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is "no". Indeed, the user wants to bring a birth certificate, a W-2 form, and a proof of car registration. According to the context document, to apply for a driver's license, the user must bring one proof of Identity, one proof of Social Security Number, and two different proof of Residency documents. A birth certificate is one proof of Identity, a W-2 form is one proof of Social Security Number, and a proof of car registration is one proof of Residency document. The user cannot apply for a driver's license without bringing a second proof of Residency document. I reached this conclusion using only the information available in the context document and in the user's question. - context: | Document: XYZ Corporation released three models of phones in 2024: Model MX, Model MY, and Model MZ. Model MX is the budget-conscious choice, it has 6-inch screen, a single rear camera, and 32 GB of storage. Model MY is the most portable choice, with a 5-inch screen, a dual rear camera, and 32 GB of storage. Model MZ is the luxury choice, sporting a 7-inch screen, a dual rear camera, and 64 GB of storage. These three models are the only ones available for sale in our store. question: | User asks the following question: "Does your store sell a phone made by XYZ Corporation that has a 6-inch screen and a dual rear camera?" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is "no". Indeed, the user is looking for a phone made by XYZ Corporation that has a 6-inch screen and a dual rear camera. According to the document, our store sells only three models of phones made by XYZ Corporation: Model MX, Model MY, and Model MZ. Only Model MX has a 6-inch screen, but it has a single rear camera; models MY and MZ do not have a 6-inch screen. Hence, our store does not sell a phone the user is looking for. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: Solar System Overview The solar system has one star, eight planets, five dwarf planets, at least 290 moons, more than 1.3 million asteroids, and about 3,900 comets. It is located in an outer spiral arm of the Milky Way galaxy called the Orion Arm, or Orion Spur. Our solar system orbits the center of the galaxy at about 515,000 mph (828,000 kph). It takes about 230 million years to complete one orbit around the galactic center. We call it the solar system because it is made up of our star, the Sun, and everything bound to it by gravity - the planets Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune; dwarf planets Pluto, Ceres, Makemake, Haumea, and Eris - along with hundreds of moons; and millions of asteroids, comets, and meteoroids. question: | User asks the following question: "Does the solar system have two stars?" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is "no". 
According to the document, the solar system has only one star - the Sun, not two stars. I reached this conclusion using only the information available in the document and in the user's question. 1 1 Specify the version of the skill qna.yaml format. Currently, the valid value is 2 . 2 2 Specify your name or git username. 3 3 Specify a description of your skill and its function. 4 Specify additional context containing information that the model needs to know for executing the skill. Required for grounded skills. 4 5 Specify a question for the model. 5 6 Specify the desired response from the model. | [
"taxonomy/compositional_skills/grounded/<add_example>/qna.yaml",
"ilab taxonomy diff",
"compositional_skills/writing/freeform/<example>/qna.yaml Taxonomy in /taxonomy/ is valid :)",
"6:11 error syntax error: mapping values are not allowed here (syntax) Reading taxonomy failed with the following error: 1 taxonomy with errors! Exiting.",
"version: 2 1 created_by: <user-name> 2 task_description: 'Teach the model how to rhyme.' 3 seed_examples: - question: What are 5 words that rhyme with horn? 4 answer: warn, torn, born, thorn, and corn. 5 - question: What are 5 words that rhyme with cat? answer: bat, gnat, rat, vat, and mat. - question: What are 5 words that rhyme with poor? answer: door, shore, core, bore, and tore. - question: What are 5 words that rhyme with bank? answer: tank, rank, prank, sank, and drank. - question: What are 5 words that rhyme with bake? answer: wake, lake, steak, make, and quake.",
"version: 2 1 created_by: <user-name> 2 task_description: | 3 Given a context document and a user's question, determine whether you have enough information to provide a yes-or-no answer to the user's question. seed_examples: - context: | 4 Document: All regular employees except those on a leave of absence (LOA) must file Form XYZ. If you are a regular employee on a LOA, you do not need to file the form. If you are a supplemental employee (type B1, type B2, or type B3), your employee type determines which form you must file, as follows: * If you are a type B1 employee, you must follow the same rules that apply to the regular employees. * If you are a type B2 employee, the rules that apply to you are different from those that apply to the regular employees. See Pub. 31416 to find out which forms you should file. * If you are a supplemental employee of neither type B1 nor type B2, you are a type B3 employee. Different rules apply for each part of the year. For information on type B3 employees, see Pub. 31416. question: | 5 User asks the following question: \"I am a supplemental employee of type B1 on a LOA. Should I file Form XYZ?\" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | 6 Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is \"no\". Indeed, the user is a supplemental employee of type B1 and is on a LOA. According to the document, supplemental employees of type B1 must follow the same rules that apply to the regular employees, and a regular employee on a LOA does not need to file Form XYZ. Hence, the user does not need to file Form XYZ. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: All regular employees except those on a leave of absence (LOA) must file Form XYZ. If you are a regular employee on a LOA, you do not need to file the form. If you are a supplemental employee (type B1, type B2, or type B3), your employee type determines which form you must file, as follows: * If you are a type B1 employee, you must follow the same rules that apply to the regular employees. * If you are a type B2 employee, the rules that apply to you are different from those that apply to the regular employees. See Pub. 31416 to find out which forms you should file. * If you are a supplemental employee of neither type B1 nor type B2, you are a type B3 employee. Different rules apply for each part of the year. For information on type B3 employees, see Pub. 31416. question: | User asks the following question: \"I am a supplemental employee of type B1, not on a LOA. Should I file Form XYZ?\" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is \"yes\". Indeed, the user is a supplemental employee of type B1 and is not on a LOA. According to the document, supplemental employees of type B1 must follow the same rules that apply to the regular employees, and all regular employees except those on a LOA must file Form XYZ. Hence, the user must file Form XYZ. I reached this conclusion using only the information available in the document and in the user's question. 
- context: | Document: The Helping Hands Program enables the employees of XYZ Technologies to support nonprofit organizations within the communities where the employees live and work. All active, regular employees of XYZ Technologies are eligible to participate in the program. The following categories are ineligible to participate in the program: * Retirees who no longer work at XYZ Technologies * XYZ employees on a leave of absence * Supplemental employees and contractors who are not regular XYZ employees Please note that XYZ employees on a leave of absence can participate in the Helping Hands Program once they return to an active status. question: | User asks the following question: \"I am an active, regular employee of XYZ Technologies. I am not on a leave of absence. Can I participate in the Helping Hands Program?\" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is \"yes\". Indeed, the user is an active, regular employee of XYZ Technologies. According to the document, all active, regular employees of XYZ Technologies are eligible to participate in the Helping Hands Program. The document excludes retirees, XYZ employees on a leave of absence, and those who are not regular XYZ employees; however, the user is an active and regular employee who is not on a leave of absence, so the exclusions do not apply. Therefore, the user can participate in the Helping Hands Program. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: Applying for a driver's license is as easy as 1-2-3! You can apply for a driver's license if you bring documents from each of the three sections below: Section 1. Proof of Identity - one document is required from the list below: * Unexpired passport * Birth certificate * Certificate of naturalization or citizenship * Valid, unexpired Permanent Resident Card Section 2. Proof of Social Security Number - one document is required from the list below: * Social Security card * W-2 form * SSA-1099 form * Paystub Section 3. Proof of Residency - two different documents are required from the list below: * Home utility bill * Medical document * Proof of car registration * Proof of employment * Proof of insurance * Mortgage bill * Lease agreement * Tax return question: | User asks the following question: \"Can I apply for a driver's license if I bring my birth certificate, my W-2 form, and my proof of car registration?\" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is \"no\". Indeed, the user wants to bring a birth certificate, a W-2 form, and a proof of car registration. According to the context document, to apply for a driver's license, the user must bring one proof of Identity, one proof of Social Security Number, and two different proof of Residency documents. A birth certificate is one proof of Identity, a W-2 form is one proof of Social Security Number, and a proof of car registration is one proof of Residency document. 
The user cannot apply for a driver's license without bringing a second proof of Residency document. I reached this conclusion using only the information available in the context document and in the user's question. - context: | Document: XYZ Corporation released three models of phones in 2024: Model MX, Model MY, and Model MZ. Model MX is the budget-conscious choice, it has 6-inch screen, a single rear camera, and 32 GB of storage. Model MY is the most portable choice, with a 5-inch screen, a dual rear camera, and 32 GB of storage. Model MZ is the luxury choice, sporting a 7-inch screen, a dual rear camera, and 64 GB of storage. These three models are the only ones available for sale in our store. question: | User asks the following question: \"Does your store sell a phone made by XYZ Corporation that has a 6-inch screen and a dual rear camera?\" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is \"no\". Indeed, the user is looking for a phone made by XYZ Corporation that has a 6-inch screen and a dual rear camera. According to the document, our store sells only three models of phones made by XYZ Corporation: Model MX, Model MY, and Model MZ. Only Model MX has a 6-inch screen, but it has a single rear camera; models MY and MZ do not have a 6-inch screen. Hence, our store does not sell a phone the user is looking for. I reached this conclusion using only the information available in the document and in the user's question. - context: | Document: Solar System Overview The solar system has one star, eight planets, five dwarf planets, at least 290 moons, more than 1.3 million asteroids, and about 3,900 comets. It is located in an outer spiral arm of the Milky Way galaxy called the Orion Arm, or Orion Spur. Our solar system orbits the center of the galaxy at about 515,000 mph (828,000 kph). It takes about 230 million years to complete one orbit around the galactic center. We call it the solar system because it is made up of our star, the Sun, and everything bound to it by gravity - the planets Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune; dwarf planets Pluto, Ceres, Makemake, Haumea, and Eris - along with hundreds of moons; and millions of asteroids, comets, and meteoroids. question: | User asks the following question: \"Does the solar system have two stars?\" As the responding agent, do you have enough information to provide a yes-or-no answer to the user's question? Answer Yes or No, then explain your answer. answer: | Yes, I have enough information to provide a yes-or-no answer to the user's question. Specifically, the answer to the user's question is \"no\". According to the document, the solar system has only one star - the Sun, not two stars. I reached this conclusion using only the information available in the document and in the user's question."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/creating_skills_and_knowledge_yaml_files/adding_skills |
Chapter 12. Allowing JavaScript-based access to the API server from additional hosts | Chapter 12. Allowing JavaScript-based access to the API server from additional hosts 12.1. Allowing JavaScript-based access to the API server from additional hosts The default OpenShift Container Platform configuration only allows the web console to send requests to the API server. If you need to access the API server or OAuth server from a JavaScript application using a different hostname, you can configure additional hostnames to allow. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver.config.openshift.io cluster Add the additionalCORSAllowedOrigins field under the spec section and specify one or more additional hostnames: apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-07-11T17:35:37Z" generation: 1 name: cluster resourceVersion: "907" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\.subdomain\.domain\.com(:|\z) 1 1 The hostname is specified as a Golang regular expression that matches against CORS headers from HTTP requests against the API server and OAuth server. Note This example uses the following syntax: The (?i) makes it case-insensitive. The // pins to the beginning of the domain and matches the double slash following http: or https: . The \. escapes dots in the domain name. The (:|\z) matches the end of the domain name (\z) or a port separator (:) . Save the file to apply the changes. | [
"oc edit apiserver.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/security_and_compliance/allowing-javascript-based-access-api-server |
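A quick way to check a CORS entry like the one in the preceding chapter is to send a request that carries an Origin header and inspect the response headers. The following sketch is illustrative only: the api.<cluster_domain> URL and the my.subdomain.domain.com origin are placeholders taken from the example regular expression, not values from a real cluster.

# Send a request with an Origin header that should match the configured expression
curl -k -s -D - -o /dev/null \
  -H "Origin: https://my.subdomain.domain.com" \
  "https://api.<cluster_domain>:6443/version"

# If the origin is allowed, the response headers are expected to include:
#   Access-Control-Allow-Origin: https://my.subdomain.domain.com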
Using JBoss EAP XP 5.0 | Using JBoss EAP XP 5.0 Red Hat JBoss Enterprise Application Platform 8.0 For Use with JBoss EAP XP 5.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/index |
Chapter 2. Understanding Operators | Chapter 2. Understanding Operators 2.1. What are Operators? Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers. Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor's engineering team, monitoring a Kubernetes environment (such as OpenShift Container Platform) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time. More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes. 2.1.1. Why use Operators? Operators provide: Repeatability of installation and upgrade. Constant health checks of every system component. Over-the-air (OTA) updates for OpenShift components and ISV content. A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two. Why deploy on Kubernetes? Kubernetes (and by extension, OpenShift Container Platform) contains all of the primitives needed to build complex distributed systems - secret handling, load balancing, service discovery, autoscaling - that work across on-premises and cloud providers. Why manage your app with Kubernetes APIs and kubectl tooling? These APIs are feature rich, have clients for all platforms and plug into the cluster's access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB , looks and acts just like the built-in, native Kubernetes objects. How do Operators compare with service brokers? A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well. 2.1.2. Operator Framework The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems: Operator SDK The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities. Operator Lifecycle Manager Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in OpenShift Container Platform 4.17. 
Operator Registry The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM. OperatorHub OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform. These tools are designed to be composable, so you can use any that are useful to you. 2.1.3. Operator maturity model The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator. One can however generalize the scale of the maturity of the encapsulated operations of an Operator for certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator: Figure 2.1. Operator maturity model The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK. 2.2. Operator Framework packaging format This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.2.1. Bundle format The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata. An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image , which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay. Operator metadata can include: Information that identifies the Operator, for example its name and version. Additional information that drives the UI, for example its icon and some example custom resources (CRs). Required and provided APIs. Related images. When loading manifests into the Operator Registry database, the following requirements are validated: The bundle must have at least one channel defined in the annotations. Every bundle has exactly one cluster service version (CSV). If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle. 2.2.1.1. Manifests Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator. A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory. 
Example bundle format layout etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml Additionally supported objects The following object types can also be optionally included in the /manifests directory of a bundle: Supported optional object types ClusterRole ClusterRoleBinding ConfigMap ConsoleCLIDownload ConsoleLink ConsoleQuickStart ConsoleYamlSample PodDisruptionBudget PriorityClass PrometheusRule Role RoleBinding Secret Service ServiceAccount ServiceMonitor VerticalPodAutoscaler When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV: Lifecycle for optional objects When the CSV is deleted, OLM deletes the optional object. When the CSV is upgraded: If the name of the optional object is the same, OLM updates it in place. If the name of the optional object has changed between versions, OLM deletes and recreates it. 2.2.1.2. Annotations A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles: Example annotations.yaml annotations: operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1 operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2 operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3 operators.operatorframework.io.bundle.package.v1: "test-operator" 4 operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5 operators.operatorframework.io.bundle.channel.default.v1: "stable" 6 1 The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects. 2 The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/ . The value manifests.v1 implies that the bundle contains Operator manifests. 3 The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/ . The value metadata.v1 implies that this bundle has Operator metadata. 4 The package name of the bundle. 5 The list of channels the bundle is subscribing to when added into an Operator Registry. 6 The default channel an Operator should be subscribed to when installed from a registry. Note In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file. 2.2.1.3. Dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . 
olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 Additional resources Operator Lifecycle Manager dependency resolution 2.2.1.4. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. See CLI tools for steps on installing the opm CLI. 2.2.2. Highlights File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade edges Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. 
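The composition described above, copying catalogA and catalogB into a new catalogC, can be sketched with ordinary file system commands. The directory names are the hypothetical ones from the example, and opm validate is run afterward because, as noted, duplicate packages or duplicate bundles within a package are not permitted:

# Compose two existing file-based catalogs into a new one
mkdir catalogC
cp -r catalogA catalogB catalogC/

# Fail early if the combined catalog contains duplicates or other schema violations
opm validate catalogC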
Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.2.2.1. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. 
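For orientation, the index.yaml files in a layout like the one above each hold the catalog blobs for a single package. The following is a minimal sketch of what such a file might contain; the package name, channel, bundle name, and image reference are hypothetical, and a file produced by opm render typically carries additional properties and related images:

---
# Package-level metadata: one olm.package blob per package
schema: olm.package
name: example-operator
defaultChannel: stable
---
# At least one channel with its upgrade entries
schema: olm.channel
package: example-operator
name: stable
entries:
- name: example-operator.v0.1.0
---
# One olm.bundle blob per released version
schema: olm.bundle
package: example-operator
name: example-operator.v0.1.0
image: quay.io/example-org/example-operator-bundle:v0.1.0
properties:
- type: olm.package
  value:
    packageName: example-operator
    version: 0.1.0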
The catalog can also be included in a parent catalog by copying it into the parent catalog's root directory. 2.2.2.2. Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 2.2.2.2.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon. Example 2.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 2.2.2.2.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles. If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 2.2. olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are longer installable by users with the spec.startingCSV property of Subscription objects. You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces field. 
Ensure that the replaces field points to the immediate version of the Operator version in question. 2.2.2.2.3. olm.bundle schema Example 2.3. olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 2.2.2.2.4. olm.deprecations schema The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog. When this schema is defined, the OpenShift Container Platform web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub. An olm.deprecations schema entry contains one or more of the following reference types, which indicates the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object. Table 2.1. Deprecation reference types Type Scope Status condition olm.package Represents the entire package PackageDeprecated olm.channel Represents one channel ChannelDeprecated olm.bundle Represents one bundle version BundleDeprecated Each reference type has their own requirements, as detailed in the following example. Example 2.4. Example olm.deprecations schema with each reference type schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support. 1 Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field. 2 The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema. 3 All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob. 4 The name field for the olm.channel schema is required. 5 The name field for the olm.bundle schema is required. Note The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle. Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package's index.yaml file: Example directory structure for a catalog with deprecations my-catalog └── my-operator ├── index.yaml └── deprecations.yaml Additional resources Updating or filtering a file-based catalog image 2.2.2.3. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. 
The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. OLM defines a handful of property types, again using the reserved olm.* prefix. 2.2.2.3.1. olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 2.5. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 2.2.2.3.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 2.6. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.3.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. Example 2.7. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 2.2.2.3.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 2.8. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.4. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. 
There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 2.2.2.5. Guidelines Consider the following guidelines when maintaining file-based catalogs. 2.2.2.5.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 2.2.2.5.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 2.2.2.6. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 2.2.2.7. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. 
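The "new upgrade edges" case above might translate into channel metadata along the following lines. The bundle names are hypothetical and follow the olm.channel schema shown earlier; the 1.3.0 entry skips 1.2.4 so that users already on 1.2.4 still have an upgrade path to the channel head:

schema: olm.channel
package: example-operator
name: stable
entries:
- name: example-operator.v1.2.3
- name: example-operator.v1.2.4
  replaces: example-operator.v1.2.3
- name: example-operator.v1.3.0
  replaces: example-operator.v1.2.3
  # 1.2.4 was released after 1.3.0, so it is skipped rather than replaced
  skips:
  - example-operator.v1.2.4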
Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 2.3. Operator Framework glossary of common terms This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK. 2.3.1. Bundle In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster. 2.3.2. Bundle image In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub. 2.3.3. Catalog source A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies. 2.3.4. Channel A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest. An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel. 2.3.5. Channel head A channel head refers to the latest known update in a particular channel. 2.3.6. Cluster service version A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on. 2.3.7. Dependency An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer. OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles. 2.3.8. Index image In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool. 2.3.9. Install plan An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV. 2.3.10. Multitenancy A tenant in OpenShift Container Platform is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams. When a cluster is shared by multiple users or groups, it is considered a multitenant cluster. 2.3.11. 
Operator group An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide. 2.3.12. Package In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs. 2.3.13. Registry A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels. 2.3.14. Subscription A subscription keeps CSVs up to date by tracking a channel in a package. 2.3.15. Update graph An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added. 2.4. Operator Lifecycle Manager (OLM) 2.4.1. Operator Lifecycle Manager concepts and resources This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.1.1. What is Operator Lifecycle Manager? Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 2.2. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.17, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. 2.4.1.2. OLM resources The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM): Table 2.2. CRDs managed by OLM and Catalog Operators Resource Short name Description ClusterServiceVersion (CSV) csv Application metadata. For example: name, version, icon, required resources. CatalogSource catsrc A repository of CSVs, CRDs, and packages that define an application. Subscription sub Keeps CSVs up to date by tracking a channel in a package. InstallPlan ip Calculated list of resources to be created to automatically install or upgrade a CSV. OperatorGroup og Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. OperatorConditions - Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM. 2.4.1.2.1. Cluster service version A cluster service version (CSV) represents a specific version of a running Operator on an OpenShift Container Platform cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster. 
OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm , deb , or apk bundle. A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo. A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment. 2.4.1.2.2. Catalog source A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OpenShift Container Platform web console also displays the Operators provided by catalog sources. Tip Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration Cluster Settings Configuration OperatorHub page in the web console. The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API. Example 2.9. Example CatalogSource object \ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace 1 Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace. 2 Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace . The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace. 3 Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog's index image version as part of cluster upgrades. 
Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details. 4 Display name for the catalog in the web console and CLI. 5 Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time. 6 Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs. 7 Source types include the following: grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API. grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases. configmap : OLM parses config map data and runs a pod that can serve the gRPC API over it. 8 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 9 Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image , if defined. 10 Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image , if defined. Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ( "" ) assigns the pod the default priority. Other priority classes can be defined manually. 11 Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image , if defined. 12 Automatically check for new versions at a given interval to stay up-to-date. 13 Last observed state of the catalog connection. For example: READY : A connection is successfully established. CONNECTING : A connection is attempting to establish. TRANSIENT_FAILURE : A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again. See States of Connectivity in the gRPC documentation for more details. 14 Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date. 15 Status information for the catalog's Operator Registry service. Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator: Example 2.10. Example Subscription object referencing a catalog source apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace Additional resources Understanding OperatorHub Red Hat-provided Operator catalogs Adding a catalog source to a cluster Catalog priority Viewing Operator catalog source status by using the CLI Understanding and managing pod security admission Catalog source pod scheduling 2.4.1.2.2.1. 
Image template for custom catalog sources Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OpenShift Container Platform 4.17. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from OpenShift Container Platform 4.16 to 4.17, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.16 to: registry.redhat.io/redhat/redhat-operator-index:v4.17 However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image. Starting in OpenShift Container Platform 4.9, cluster administrators can add the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template: kube_major_version kube_minor_version kube_patch_version Note You must specify the Kubernetes cluster version and not an OpenShift Container Platform cluster version, as the latter is not currently available for templating. Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path. Important You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade. Example 2.11. Example catalog source with an image template apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.30 priority: -400 publisher: Example Org Note If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value. If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition. 
For an OpenShift Container Platform 4.17 cluster, which uses Kubernetes 1.30, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference: quay.io/example-org/example-catalog:v1.30 For future releases of OpenShift Container Platform, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OpenShift Container Platform version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OpenShift Container Platform version would then automatically update the catalog's index image as well. 2.4.1.2.2.2. Catalog health requirements Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster. For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A. As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator. As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog. Additional resources Removing custom catalogs Disabling the default OperatorHub catalog sources 2.4.1.2.3. Subscription A subscription , defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha , beta , or stable , helps determine which Operator stream should be installed from the catalog source. The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). 
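For instance, a subscription that tracks the stable channel of the example-operator package but holds updates for manual review might look like the following sketch. The package, catalog, and namespace names are the same placeholders used in the previous example, and the installPlanApproval field is described in the install plan section that follows. Example Subscription object with manual install plan approval (sketch)
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable                          # update stream to track within the package
  name: example-operator                   # package name as published in the catalog
  source: example-catalog                  # CatalogSource object that provides the package
  sourceNamespace: openshift-marketplace   # namespace of the catalog source
  installPlanApproval: Manual              # require an administrator to approve each install plan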
In addition to being easily visible from the OpenShift Container Platform web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster. Additional resources Multitenancy and Operator colocation Viewing Operator subscription status by using the CLI 2.4.1.2.4. Install plan An install plan , defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV). To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator. The install plan must then be approved according to one of the following approval strategies: If the subscription's spec.installPlanApproval field is set to Automatic , the install plan is approved automatically. If the subscription's spec.installPlanApproval field is set to Manual , the install plan must be manually approved by a cluster administrator or user with proper permissions. After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription. Example 2.12. Example InstallPlan object apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: ... catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- ... name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- ... name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- ... name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created ... Additional resources Multitenancy and Operator colocation Allowing non-cluster administrators to install Operators 2.4.1.2.5. 
Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. Additional resources Operator groups 2.4.1.2.6. Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. Additional resources Operator conditions 2.4.2. Operator Lifecycle Manager architecture This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.2.1. Component responsibilities Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 2.3. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 2.4. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions 2.4.2.2. OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. 
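To illustrate what the OLM Operator consumes, the following is a minimal, hypothetical sketch of the install strategy portion of a CSV using the deployment strategy; the Operator name, labels, and image are placeholders rather than a real product, and most other CSV fields are omitted for brevity. Example CSV install strategy (sketch)
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.0
  namespace: placeholder
spec:
  displayName: Example Operator
  version: 1.0.0
  installModes:
  - type: OwnNamespace
    supported: true
  - type: AllNamespaces
    supported: false
  install:
    strategy: deployment            # OLM runs the Operator as this Deployment
    spec:
      deployments:
      - name: example-operator
        spec:
          replicas: 1
          selector:
            matchLabels:
              name: example-operator
          template:
            metadata:
              labels:
                name: example-operator
            spec:
              containers:
              - name: example-operator
                image: quay.io/example-org/example-operator:v1.0.0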
The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. 2.4.2.3. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. 2.4.2.4. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. 2.4.3. Operator Lifecycle Manager workflow This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.3.1. Operator installation and upgrade workflow in OLM In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades: ClusterServiceVersion (CSV) CatalogSource Subscription Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API , to query for available Operators as well as upgrades for installed Operators. Figure 2.3. Catalog source overview Within a catalog source, Operators are organized into packages and streams of updates called channels , which should be a familiar update pattern from OpenShift Container Platform or other software on a continuous release cycle like web browsers. Figure 2.4. Packages and channels in a Catalog source A user indicates a particular package and channel in a particular catalog source in a subscription , for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed. Note OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog channel package path does not necessarily need to be the highest version number. 
It should be thought of more as the head reference of a channel, similar to a Git repository. Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates: Figure 2.5. OLM graph of available channel updates Example channels in a package packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV. 2.4.3.1.1. Example upgrade path For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1 . OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2 , which in turn replaces the older and installed CSV version 0.1.1 . OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1 ; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head. For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1 . Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2 . At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed. 2.4.3.1.2. Skipping upgrades The basic path for upgrades in OLM is: A catalog source is updated with one or more updates to an Operator. OLM traverses every version of the Operator until reaching the latest version the catalog source contains. However, sometimes this is not a safe operation to perform. There will be cases where a published version of an Operator should never be installed on a cluster if it has not been already, for example because a version introduces a serious vulnerability. In those cases, OLM must consider two cluster states and provide an update graph that supports both: The "bad" intermediate Operator has been seen by the cluster and installed. The "bad" intermediate Operator has not yet been installed onto the cluster. By shipping a new catalog and adding a skipped release, OLM is ensured that it can always get a single unique update regardless of the cluster state and whether it has seen the bad update yet. Example CSV with skipped release apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1 Consider the following example of Old CatalogSource and New CatalogSource . Figure 2.6. Skipping updates This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . If the bad update has not yet been installed, it will never be. 2.4.3.1.3. Replacing multiple Operators Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several.
This can be accomplished using the skipRange annotation: olm.skipRange: <semver_range> where <semver_range> has the version range format supported by the semver library . When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel. The order of precedence is: Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met. The Operator that replaces the current one, in the source specified by sourceName . Channel head in another source that is visible to the subscription, if the other criteria for skipping are met. The Operator that replaces the current one in any source visible to the subscription. Example CSV with skipRange apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2' 2.4.3.1.4. Z-stream support A z-stream , or patch release, must replace all z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions, it just needs to build the correct graph in a catalog. In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource : Figure 2.7. Replacing several Operators This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource . Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this. 2.4.4. Operator Lifecycle Manager dependency resolution This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.4.1. About dependency resolution Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm . However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other. As a result, OLM must never create the following scenarios: Install a set of Operators that require APIs that cannot be provided Update an Operator in a way that breaks another that depends upon it This is made possible with two types of data: Properties Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator. Constraints or dependencies An Operator's requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. 
Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed. OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed. 2.4.4.2. Operator properties All Operators in a catalog have the following properties: olm.package Includes the name of the package and the version of the Operator olm.gvk A single property for each provided API from the cluster service version (CSV) Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle. Example arbitrary property properties: - type: olm.kubeversion value: version: "1.16.0" 2.4.4.2.1. Arbitrary properties Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime. These properties are opaque to the resolver as it does not understand the properties, but it can evaluate the generic constraints against those properties to determine if the constraints can be satisfied given the properties list. Example arbitrary properties properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints. Additional resources Common Expression Language (CEL) constraints 2.4.4.3. Operator dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 2.4.4.4. Generic constraints An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field holding a string-representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime. 
The following keys denote the available constraint types: gvk Type whose value and interpretation is identical to the olm.gvk type package Type whose value and interpretation is identical to the olm.package type cel A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information all , any , not Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk or a nested compound constraint 2.4.4.4.1. Common Expression Language (CEL) constraints The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint. Example cel constraint type: olm.constraint value: failureMessage: 'require to have "certified"' cel: rule: 'properties.exists(p, p.type == "certified")' The CEL syntax supports a wide range of logical operators, such as AND and OR . As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is solved into a single bundle or Operator that satisfies all of those rules within a single constraint. Example cel constraint with multiple rules type: olm.constraint value: failureMessage: 'require to have "certified" and "stable" properties' cel: rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")' 2.4.4.4.2. Compound constraints (all, any, not) Compound constraint types are evaluated following their logical definitions. The following is an example of a conjunctive constraint ( all ) of two packages and one GVK. That is, they must all be satisfied by installed bundles: Example all constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because... all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for... gvk: group: greens.example.com version: v1 kind: Green The following is an example of a disjunctive constraint ( any ) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles: Example any constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because... any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue The following is an example of a negation constraint ( not ) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set: Example not constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because... not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens The negation semantics might appear unclear in the not constraint context. 
To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set. As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense. 2.4.4.4.3. Nested compound constraints A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type. The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint: Example nested compound constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because... any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue Note The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks. 2.4.4.5. Dependency preferences There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear. 2.4.4.5.1. Catalog priority On OpenShift Container Platform cluster, OLM reads catalog sources to know which Operators are available for installation. Example CatalogSource object apiVersion: "operators.coreos.com/v1alpha1" kind: "CatalogSource" metadata: name: "my-operators" namespace: "operators" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: "My Operators" priority: 100 1 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency. There are two rules that govern catalog preference: Options in higher-priority catalogs are preferred to options in lower-priority catalogs. Options in the same catalog as the dependent are preferred to any other catalogs. 2.4.4.5.2. Channel ordering An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Container Platform cluster. Channels can be used to provide a particular stream of updates for a minor release ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels. Each package has a default channel, which is always preferred to non-default channels. 
If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name. 2.4.4.5.3. Order within a channel There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs. When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency. Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first. 2.4.4.5.4. Other constraints In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants. 2.4.4.5.4.1. Subscription constraint A subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated. 2.4.4.5.4.2. Package constraint Within a namespace, no two Operators may come from the same package. 2.4.4.5.5. Additional resources Catalog health requirements 2.4.4.6. CRD upgrades OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions: All existing serving versions in the current CRD are present in the new CRD. All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD. Additional resources Adding a new CRD version Deprecating or removing a CRD version 2.4.4.7. Dependency best practices When specifying dependencies, there are best practices you should consider. Depend on APIs or a specific version range of Operators Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead. Set a minimum version The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible. For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended. For example: TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource. TestOperator v1.0.1 adds a new field spec.newfield to MyObject , but still at v1alpha1. Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0. Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum.
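As a sketch of that guidance, a dependencies.yaml file for the hypothetical TestOperator scenario above could pair the API requirement with a minimum package version. The group example.com and the package name testoperator are assumptions for illustration only. Example dependencies.yaml combining an API and a minimum version (sketch)
dependencies:
- type: olm.gvk                 # require the MyObject API itself
  value:
    group: example.com          # assumed group for the example
    kind: MyObject
    version: v1alpha1
- type: olm.package             # additionally require a minimum provider version
  value:
    packageName: testoperator   # assumed package name for TestOperator
    version: ">=1.0.1"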
Omit a maximum version or allow a very wide range Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency. Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0 . Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound. Note Cluster administrators cannot override dependencies set by an Operator author. However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1 . Additional resources Kubernetes documentation: Changing the API 2.4.4.8. Dependency caveats When specifying dependencies, there are caveats you should consider. No compound constraints (AND) There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0 . This means that when specifying a dependency such as: dependencies: - type: olm.package value: packageName: etcd version: ">3.1.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0 . Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the ordering that potential options are visited. Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other. Cross-namespace compatibility OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa. 2.4.4.9. Example dependency resolution scenarios In the following examples, a provider is an Operator which "owns" a CRD or API service. Example: Deprecating dependent APIs A and B are APIs (CRDs): The provider of A depends on B. The provider of B has a subscription. The provider of B updates to provide C but deprecates B. This results in: B no longer has a provider. A no longer works. This is a case OLM prevents with its upgrade strategy. Example: Version deadlock A and B are APIs: The provider of A requires B. The provider of B requires A. The provider of A updates to (provide A2, require B2) and deprecate A. The provider of B updates to (provide B2, require A2) and deprecate B. If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found. This is another case OLM prevents with its upgrade strategy. 2.4.5. Operator groups This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 
2.4.5.1. About Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. 2.4.5.2. Operator group membership An Operator is considered a member of an Operator group if the following conditions are true: The CSV of the Operator exists in the same namespace as the Operator group. The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group. An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes : Table 2.5. Install modes and supported Operator groups InstallModeType Description OwnNamespace The Operator can be a member of an Operator group that selects its own namespace. SingleNamespace The Operator can be a member of an Operator group that selects one namespace. MultiNamespace The Operator can be a member of an Operator group that selects more than one namespace. AllNamespaces The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string "" ). Note If the spec of a CSV omits an entry of InstallModeType , then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it. 2.4.5.3. Target namespace selection You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace You can alternatively specify a namespace using a label selector with the spec.selector parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: "true" Important Listing multiple namespaces via spec.targetNamespaces or using a label selector via spec.selector is not recommended, as the support for more than one target namespace in an Operator group will likely be removed in a future release. If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespaces parameter of a global Operator group contains the empty string ( "" ), which signals to a consuming Operator that it should watch all namespaces. 2.4.5.4. Operator group CSV annotations Member CSVs of an Operator group have the following annotations: Annotation Description olm.operatorGroup=<group_name> Contains the name of the Operator group. olm.operatorNamespace=<group_namespace> Contains the namespace of the Operator group. olm.targetNamespaces=<target_namespaces> Contains a comma-delimited string that lists the target namespace selection of the Operator group.
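For example, the metadata of a member CSV in an Operator group named my-group, installed in my-namespace and targeting two other namespaces, might carry annotations like the following sketch; all names are placeholders.
metadata:
  annotations:
    olm.operatorGroup: my-group
    olm.operatorNamespace: my-namespace
    olm.targetNamespaces: my-namespace-1,my-namespace-2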
Note All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants. 2.4.5.5. Provided APIs annotation A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about what GVKs are provided by an Operator group are shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included. Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local ... spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local 2.4.5.6. Role-based access control When an Operator group is created, three cluster roles are generated. Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below: Cluster role Label to match olm.og.<operatorgroup_name>-admin-<hash_value> olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> olm.og.<operatorgroup_name>-edit-<hash_value> olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> olm.og.<operatorgroup_name>-view-<hash_value> olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict : Cluster roles for each API resource from a CRD Cluster roles for each API resource from an API service Additional roles and role bindings Table 2.6. Cluster roles generated for each API resource from a CRD Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> <kind>.<group>-<version>-view-crdview Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name> : get Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Table 2.7. 
Cluster roles generated for each API resource from an API service Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Additional roles and role bindings If the CSV defines exactly one target namespace that contains * , then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels. If the CSV does not define exactly one target namespace that contains * , then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace. 2.4.5.7. Copied CSVs OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there. Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants. Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV. Note By default, the disableCopiedCSVs field is disabled. If you enable the disableCopiedCSVs field, OLM deletes the existing copied CSVs on the cluster. If you later disable the field again, OLM recreates the copied CSVs. Disable the disableCopiedCSVs field: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF Enable the disableCopiedCSVs field: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF 2.4.5.8. Static Operator groups An Operator group is static if its spec.staticProvidedAPIs field is set to true . As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources.
Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" annotation: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: "true" 2.4.5.9. Operator group intersection Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set. A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces. Note When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces. Rules for intersection Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set: If true and the CSV's provided APIs are a subset of the Operator group's: Continue transitioning. If true and the CSV's provided APIs are not a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the union of itself and the CSV's provided APIs. If false and the CSV's provided APIs are not a subset of the Operator group's: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict . If false and the CSV's provided APIs are a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the difference between itself and the CSV's provided APIs. Note Failure states caused by Operator groups are non-terminal. The following actions are performed each time an Operator group synchronizes: The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored. The cluster set is compared to olm.providedAPIs , and if olm.providedAPIs contains any extra APIs, then those APIs are pruned. All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV. 2.4.5.10. Limitations for multitenant Operator management OpenShift Container Platform provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator's API versions must be the same. 
Operators are control plane extensions due to their usage of CustomResourceDefinition objects (CRDs), which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs. This makes them incompatible to install simultaneously in different namespaces on a cluster. All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster. The supported scenarios include the following: Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions) Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster. Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation Operators in multitenant clusters Allowing non-cluster administrators to install Operators 2.4.5.11. Troubleshooting Operator groups Membership An install plan's namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios: No Operator groups exist in the install plan's namespace. Multiple Operator groups exist in the install plan's namespace. An incorrect or non-existent service account name is specified in the Operator group. If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace: attenuated service account query failed - more than one operator group(s) are managing this namespace count=2 where count= specifies the number of Operator groups in the namespace. If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup . CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection. 2.4.6. Multitenancy and Operator colocation This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM). 2.4.6.1. Colocation of Operators in a namespace Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. This default behavior manifests in two ways: InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace. All Operators in the same namespace share the same update policy. 
For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual. These scenarios can lead to the following issues: It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator. It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators. These issues usually surface because, when installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. As a cluster administrator, you can bypass this default behavior manually by using the following workflow: Create a namespace for the installation of the Operator. Create a custom global Operator group , which is an Operator group that watches all namespaces. Associating this Operator group with the namespace you just created makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces. Install the desired Operator in the installation namespace. If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans. For a detailed procedure, see "Installing global Operators in custom namespaces". Additional resources Installing global Operators in custom namespaces Operators in multitenant clusters 2.4.7. Operator conditions This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions. 2.4.7.1. About Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There is a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. 2.4.7.2. Supported conditions Operator Lifecycle Manager (OLM) supports the following Operator conditions. 2.4.7.2.1. Upgradeable condition The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when: An Operator is about to start a critical process and should not be upgraded until the process is completed. An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded. Important Setting the Upgradeable Operator condition to the False value does not avoid pod disruption.
If you must ensure your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section. Example Upgradeable Operator condition apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: "False" 2 reason: "migration" message: "The Operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Name of the condition. 2 A False value indicates the Operator is not ready to be upgraded. OLM prevents a CSV that replaces the existing CSV of the Operator from leaving the Pending phase. A False value does not block cluster upgrades. 2.4.7.3. Additional resources Managing Operator conditions Enabling Operator conditions Using pod disruption budgets to specify the number of pods that must be up Graceful termination 2.4.8. Operator Lifecycle Manager metrics 2.4.8.1. Exposed metrics Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack. Table 2.8. Metrics exposed by OLM Name Description catalog_source_count Number of catalog sources. catalogsource_ready State of a catalog source. The value 1 indicates that the catalog source is in a READY state. The value of 0 indicates that the catalog source is not in a READY state. csv_abnormal When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded , for example when it is not installed. Includes the name , namespace , phase , reason , and version labels. A Prometheus alert is created when this metric is present. csv_count Number of CSVs successfully registered. csv_succeeded When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1 ) or not (value 0 ). Includes the name , namespace , and version labels. csv_upgrade_count Monotonic count of CSV upgrades. install_plan_count Number of install plans. installplan_warnings_total Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan. olm_resolution_duration_seconds The duration of a dependency resolution attempt. subscription_count Number of subscriptions. subscription_sync_total Monotonic count of subscription syncs. Includes the channel , installed CSV, and subscription name labels. 2.4.9. Webhook management in Operator Lifecycle Manager Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator. See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM. 2.4.9.1. Additional resources Types of webhook admission plugins Kubernetes documentation: Validating admission webhooks Mutating admission webhooks Conversion webhooks 2.5. Understanding OperatorHub 2.5.1. About OperatorHub OperatorHub is the web console interface in OpenShift Container Platform that cluster administrators use to discover and install Operators. 
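The same catalog content can also be browsed from the CLI. For example, the following command lists every package that the catalog sources on the cluster currently advertise:
USD oc get packagemanifests -n openshift-marketplace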
With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM). Cluster administrators can choose from catalogs grouped into the following categories: Category Description Red Hat Operators Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. Certified Operators Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. Red Hat Marketplace Certified software that can be purchased from Red Hat Marketplace . Community Operators Optionally-visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. Custom Operators Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub. Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions. The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com . 2.5.2. OperatorHub architecture The OperatorHub UI component is driven by the Marketplace Operator by default on OpenShift Container Platform in the openshift-marketplace namespace. 2.5.2.1. OperatorHub custom resource The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. You can modify this resource to enable or disable the default catalogs, which is useful when configuring OpenShift Container Platform in restricted network environments. Example OperatorHub custom resource apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: "community-operators", disabled: false } ] 1 disableAllDefaultSources is an override that controls availability of all default catalogs that are configured by default during an OpenShift Container Platform installation. 2 Disable default catalogs individually by changing the disabled parameter value per source. 2.5.3. Additional resources Catalog source About the Operator SDK Defining cluster service versions (CSVs) Operator installation and upgrade workflow in OLM Red Hat Partner Connect Red Hat Marketplace 2.6. Red Hat-provided Operator catalogs Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. 
The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs , Operator Framework packaging format , and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.6.1. About Operator catalogs An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster. As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content. As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Note Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in OpenShift Container Platform 4.8 and later. When creating custom catalog images, versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders must use the opm index command to manage index images. Additional resources Managing custom catalogs Packaging format Using Operator Lifecycle Manager in disconnected environments 2.6.2. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4.17 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. certified-operators registry.redhat.io/redhat/certified-operator-index:v4.17 Products from leading independent software vendors (ISVs). 
Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4.17 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4.17 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 2.7. Operators in multitenant clusters The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters. In order for multiple tenants on an OpenShift Container Platform cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege. Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements. Additional resources Common terms: Multitenant Limitations for multitenant Operator management 2.7.1. Default Operator install modes and behavior When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator's capabilities: Single namespace Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace. All namespaces Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator's suggested namespace. This choice also means that users in the affected namespaces get access to the Operator's APIs, which can leverage the custom resources (CRs) they own, depending on their role in the namespace: The namespace-admin and namespace-edit roles can read/write to the Operator APIs, meaning they can use them. The namespace-view role can read CR objects of that Operator. For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator's privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces. Additional resources Adding Operators to a cluster Install modes types Setting a suggested namespace 2.7.2. Recommended solution for multitenant clusters While a Multinamespace install mode does exist, it is supported by very few Operators.
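For reference, a Multinamespace-style Operator group is simply one whose spec lists more than one target namespace. A minimal sketch, with illustrative namespace names, might look like the following:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: multi-namespace-og
  namespace: operators
spec:
  targetNamespaces:
  - tenant-a
  - tenant-b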
As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow: Create a namespace for the tenant Operator that is separate from the tenant's namespace. Create an Operator group for the tenant Operator scoped only to the tenant's namespace. Install the Operator in the tenant Operator namespace. As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account is visible or usable by the tenant. This solution provides better tenant separation and adherence to the principle of least privilege, at the cost of higher resource usage and the additional orchestration needed to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters". Limitations and considerations This solution only works when the following constraints are met: All instances of the same Operator must be the same version. The Operator cannot have dependencies on other Operators. The Operator cannot ship a CRD conversion webhook. Important You cannot use different versions of the same Operator on the same cluster. Eventually, the installation of another instance of the Operator would be blocked when it meets the following conditions: The instance is not the newest version of the Operator. The instance ships an older revision of the CRDs that lacks information or versions that the newer revisions already in use on the cluster have. Warning As an administrator, use caution when allowing non-cluster administrators to install Operators self-sufficiently, as explained in "Allowing non-cluster administrators to install Operators". These tenants should only have access to a curated catalog of Operators that are known to not have dependencies. These tenants must also be forced to use the same version line of an Operator, to ensure the CRDs do not change. This requires the use of namespace-scoped catalogs and likely disabling the global default catalogs. Additional resources Preparing for multiple instances of an Operator for multitenant clusters Allowing non-cluster administrators to install Operators Disabling the default OperatorHub catalog sources 2.7.3. Operator colocation and Operator groups Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation . 2.8. CRDs 2.8.1. Extending the Kubernetes API with custom resource definitions Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so that custom objects managed by the Operator look and act just like the built-in, native Kubernetes objects. This guide describes how cluster administrators can extend their OpenShift Container Platform cluster by creating and managing CRDs. 2.8.1.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects.
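You can list the resource endpoints that your cluster currently serves, including any that are later added by CRDs, with the following command:
USD oc api-resources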
A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. When a cluster administrator adds a new CRD to the cluster, the Kubernetes API server reacts by creating a new RESTful resource path that can be accessed by the entire cluster or a single project (namespace) and begins serving the specified CR. Cluster administrators that want to grant access to the CRD to other users can use cluster role aggregation to grant access to users with the admin , edit , or view default cluster roles. Cluster role aggregation allows the insertion of custom policy rules into these cluster roles. This behavior integrates the new resource into the RBAC policy of the cluster as if it was a built-in resource. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.1.2. Creating a custom resource definition To create custom resource (CR) objects, cluster administrators must first create a custom resource definition (CRD). Prerequisites Access to an OpenShift Container Platform cluster with cluster-admin user privileges. Procedure To create a CRD: Create a YAML file that contains the following field types: Example YAML file for a CRD apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9 1 Use the apiextensions.k8s.io/v1 API. 2 Specify a name for the definition. This must be in the <plural-name>.<group> format using the values from the group and plural fields. 3 Specify a group name for the API. An API group is a collection of objects that are logically related. For example, all batch objects like Job or ScheduledJob could be in the batch API group (such as batch.api.example.com ). A good practice is to use a fully-qualified-domain name (FQDN) of your organization. 4 Specify a version name to be used in the URL. Each API group can exist in multiple versions, for example v1alpha , v1beta , v1 . 5 Specify whether the custom objects are available to a project ( Namespaced ) or all projects in the cluster ( Cluster ). 6 Specify the plural name to use in the URL. The plural field is the same as a resource in an API URL. 7 Specify a singular name to use as an alias on the CLI and for display. 8 Specify the kind of objects that can be created. The type can be in CamelCase. 9 Specify a shorter string to match your resource on the CLI. Note By default, a CRD is cluster-scoped and available to all projects. Create the CRD object: USD oc create -f <file_name>.yaml A new RESTful API endpoint is created at: /apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/... 
For example, using the example file, the following endpoint is created: /apis/stable.example.com/v1/namespaces/*/crontabs/... You can now use this endpoint URL to create and manage CRs. The object kind is based on the spec.kind field of the CRD object you created. 2.8.1.3. Creating cluster roles for custom resource definitions Cluster administrators can grant permissions to existing cluster-scoped custom resource definitions (CRDs). If you use the admin , edit , and view default cluster roles, you can take advantage of cluster role aggregation for their rules. Important You must explicitly assign permissions to each of these roles. The roles with more permissions do not inherit rules from roles with fewer permissions. If you assign a rule to a role, you must also assign that verb to roles that have more permissions. For example, if you grant the get crontabs permission to the view role, you must also grant it to the edit and admin roles. The admin or edit role is usually assigned to the user that created a project through the project template. Prerequisites Create a CRD. Procedure Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An OpenShift Container Platform controller adds the rules that you specify to the default cluster roles. Example YAML file for a cluster role definition kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" 3 rbac.authorization.k8s.io/aggregate-to-edit: "true" 4 rules: - apiGroups: ["stable.example.com"] 5 resources: ["crontabs"] 6 verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the "view" default role. rbac.authorization.k8s.io/aggregate-to-view: "true" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" 10 rules: - apiGroups: ["stable.example.com"] 11 resources: ["crontabs"] 12 verbs: ["get", "list", "watch"] 13 1 Use the rbac.authorization.k8s.io/v1 API. 2 8 Specify a name for the definition. 3 Specify this label to grant permissions to the admin default role. 4 Specify this label to grant permissions to the edit default role. 5 11 Specify the group name of the CRD. 6 12 Specify the plural name of the CRD that these rules apply to. 7 13 Specify the verbs that represent the permissions that are granted to the role. For example, apply read and write permissions to the admin and edit roles and only read permission to the view role. 9 Specify this label to grant permissions to the view default role. 10 Specify this label to grant permissions to the cluster-reader default role. Create the cluster role: USD oc create -f <file_name>.yaml 2.8.1.4. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . 
The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.1.5. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. 2.8.2. Managing resources from custom resource definitions This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs). 2.8.2.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.2.2. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . 
The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.2.3. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.16",
"registry.redhat.io/redhat/redhat-operator-index:v4.17",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.30 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.30",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operators/understanding-operators |
4.137. libhbalinux | 4.137.1. RHBA-2011:1606 - libhbalinux bug fix and enhancement update An updated libhbalinux package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The libhbalinux package contains the Host Bus Adapter API (HBAAPI) vendor library which uses standard kernel interfaces to obtain information about Fibre Channel Host Buses (FC HBA) in the system. The package has been upgraded to upstream version 1.0.12, which provides a number of bug fixes and enhancements over the previous version. (BZ# 719584 ) Users are advised to upgrade to this updated libhbalinux package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libhbalinux
Chapter 7. StorageClass [storage.k8s.io/v1] | Chapter 7. StorageClass [storage.k8s.io/v1] Description StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned. StorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name. Type object Required provisioner 7.1. Specification Property Type Description allowVolumeExpansion boolean allowVolumeExpansion shows whether the storage class allow volume expand. allowedTopologies array (TopologySelectorTerm) allowedTopologies restrict the node topologies where volumes can be dynamically provisioned. Each volume plugin defines its own supported topology specifications. An empty TopologySelectorTerm list means there is no topology restriction. This field is only honored by servers that enable the VolumeScheduling feature. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata mountOptions array (string) mountOptions controls the mountOptions for dynamically provisioned PersistentVolumes of this storage class. e.g. ["ro", "soft"]. Not validated - mount of the PVs will simply fail if one is invalid. parameters object (string) parameters holds the parameters for the provisioner that should create volumes of this storage class. provisioner string provisioner indicates the type of the provisioner. reclaimPolicy string reclaimPolicy controls the reclaimPolicy for dynamically provisioned PersistentVolumes of this storage class. Defaults to Delete. Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. volumeBindingMode string volumeBindingMode indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. This field is only honored by servers that enable the VolumeScheduling feature. Possible enum values: - "Immediate" indicates that PersistentVolumeClaims should be immediately provisioned and bound. This is the default mode. - "WaitForFirstConsumer" indicates that PersistentVolumeClaims should not be provisioned and bound until the first Pod is created that references the PeristentVolumeClaim. The volume provisioning and binding will occur during Pod scheduing. 7.2. 
API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/storageclasses DELETE : delete collection of StorageClass GET : list or watch objects of kind StorageClass POST : create a StorageClass /apis/storage.k8s.io/v1/watch/storageclasses GET : watch individual changes to a list of StorageClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/storageclasses/{name} DELETE : delete a StorageClass GET : read the specified StorageClass PATCH : partially update the specified StorageClass PUT : replace the specified StorageClass /apis/storage.k8s.io/v1/watch/storageclasses/{name} GET : watch changes to an object of kind StorageClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/storage.k8s.io/v1/storageclasses HTTP method DELETE Description delete collection of StorageClass Table 7.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind StorageClass Table 7.3. HTTP responses HTTP code Reponse body 200 - OK StorageClassList schema 401 - Unauthorized Empty HTTP method POST Description create a StorageClass Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body StorageClass schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK StorageClass schema 201 - Created StorageClass schema 202 - Accepted StorageClass schema 401 - Unauthorized Empty 7.2.2. /apis/storage.k8s.io/v1/watch/storageclasses HTTP method GET Description watch individual changes to a list of StorageClass. deprecated: use the 'watch' parameter with a list operation instead. Table 7.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/storage.k8s.io/v1/storageclasses/{name} Table 7.8. 
Global path parameters Parameter Type Description name string name of the StorageClass HTTP method DELETE Description delete a StorageClass Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.10. HTTP responses HTTP code Response body 200 - OK StorageClass schema 202 - Accepted StorageClass schema 401 - Unauthorized Empty HTTP method GET Description read the specified StorageClass Table 7.11. HTTP responses HTTP code Response body 200 - OK StorageClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StorageClass Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. HTTP responses HTTP code Response body 200 - OK StorageClass schema 201 - Created StorageClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StorageClass Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. Body parameters Parameter Type Description body StorageClass schema Table 7.16. HTTP responses HTTP code Response body 200 - OK StorageClass schema 201 - Created StorageClass schema 401 - Unauthorized Empty 7.2.4. /apis/storage.k8s.io/v1/watch/storageclasses/{name} Table 7.17. Global path parameters Parameter Type Description name string name of the StorageClass HTTP method GET Description watch changes to an object of kind StorageClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/storageclass-storage-k8s-io-v1 
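A quick way to exercise the endpoints listed above is through the oc client, which issues the same REST calls under the hood. The following is only a sketch; standard-csi is a hypothetical storage class name and a cluster-admin session is assumed:

oc get storageclasses
oc get storageclass standard-csi -o yaml
# query the raw collection endpoint directly through the API server
oc get --raw /apis/storage.k8s.io/v1/storageclasses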
11.3. Understanding the Predictable Network Interface Device Names | 11.3. Understanding the Predictable Network Interface Device Names The names have two-character prefixes based on the type of interface: en for Ethernet, wl for wireless LAN (WLAN), ww for wireless wide area network (WWAN). The names have the following types: o<index> on-board device index number s<slot>[f<function>][d<dev_id>] hotplug slot index number. All multi-function PCI devices will carry the [f<function>] number in the device name, including the function 0 device. x<MAC> MAC address [P<domain>]p<bus>s<slot>[f<function>][d<dev_id>] PCI geographical location. In PCI geographical location, the [P<domain>] number is only mentioned if the value is not 0. For example: ID_NET_NAME_PATH=P1enp5s0 [P<domain>]p<bus>s<slot>[f<function>][u<port>][..][c<config>][i<interface>] USB port number chain. For USB devices, the full chain of port numbers of hubs is composed. If the name gets longer than the maximum number of 15 characters, the name is not exported. If there are multiple USB devices in the chain, the default values for USB configuration descriptors (c1) and USB interface descriptors (i0) are suppressed. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-understanding_the_predictable_network_interface_device_names 
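To check which of these name types udev derived for a particular interface, you can query the udev database from a shell. This is a minimal sketch; enp5s0 is a hypothetical PCI Ethernet interface name:

ip link show
udevadm info -q property -p /sys/class/net/enp5s0 | grep ID_NET_NAME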
Getting Started Guide | Getting Started Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/getting_started_guide/index |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/pr01 |
Chapter 2. Configuring Red Hat build of Keycloak for production | Chapter 2. Configuring Red Hat build of Keycloak for production A Red Hat build of Keycloak production environment provides secure authentication and authorization for deployments that range from on-premise deployments that support a few thousand users to deployments that serve millions of users. This chapter describes the general areas of configuration required for a production ready Red Hat build of Keycloak environment. This information focuses on the general concepts instead of the actual implementation, which depends on your environment. The key aspects covered in this chapter apply to all environments, whether it is containerized, on-premise, GitOps, or Ansible. 2.1. TLS for secure communication Red Hat build of Keycloak continually exchanges sensitive data, which means that all communication to and from Red Hat build of Keycloak requires a secure communication channel. To prevent several attack vectors, you enable HTTP over TLS, or HTTPS, for that channel. To configure secure communication channels for Red Hat build of Keycloak, see Configuring TLS and Configuring outgoing HTTP requests . To secure the cache communication for Red Hat build of Keycloak, see Configuring distributed caches . 2.2. The hostname for Red Hat build of Keycloak In a production environment, Red Hat build of Keycloak instances usually run in a private network, but Red Hat build of Keycloak needs to expose certain public facing endpoints to communicate with the applications to be secured. For details on the endpoint categories and instructions on how to configure the public hostname for them, see Configuring the hostname (v2) . 2.2.1. Exposing the Red Hat build of Keycloak Administration APIs and UI on a different hostname It is considered a best practice to expose the Red Hat build of Keycloak Administration REST API and Console on a different hostname or context-path than the one used for the public frontend URLs that are used e.g. by login flows. This separation ensures that the Administration interfaces are not exposed to the public internet, which reduces the attack surface. Warning Access to REST APIs needs to be blocked on the reverse proxy level, if they are not intended to be publicly exposed. For details, see Configuring the hostname (v2) . 2.3. Reverse proxy in a distributed environment Apart from Configuring the hostname (v2) , production environments usually include a reverse proxy / load balancer component. It separates and unifies access to the network used by your company or organization. For a Red Hat build of Keycloak production environment, this component is recommended. For details on configuring proxy communication modes in Red Hat build of Keycloak, see Using a reverse proxy . That chapter also recommends which paths should be hidden from public access and which paths should be exposed so that Red Hat build of Keycloak can secure your applications. 2.4. Limit the number of queued requests A production environment should protect itself from an overload situation, so that it responds to as many valid requests as possible, and to continue regular operations once the situation returns to normal again. One way of doing this is rejecting additional requests once a certain threshold is reached. Load shedding should be implemented on all levels, including the load balancers in your environment. 
In addition to that, there is a feature in Red Hat build of Keycloak to limit the number of requests that can't be processed right away and need to be queued. By default, there is no limit set. Set the option http-max-queued-requests to limit the number of queued requests to a given threshold matching your environment. Any request that exceeds this limit would return with an immediate 503 Server not Available response. 2.5. Production grade database The database used by Red Hat build of Keycloak is crucial for the overall performance, availability, reliability and integrity of Red Hat build of Keycloak. For details on how to configure a supported database, see Configuring the database . 2.6. Support for Red Hat build of Keycloak in a cluster To ensure that users can continue to log in when a Red Hat build of Keycloak instance goes down, a typical production environment contains two or more Red Hat build of Keycloak instances. Red Hat build of Keycloak runs on top of JGroups and Infinispan, which provide a reliable, high-availability stack for a clustered scenario. When deployed to a cluster, the embedded Infinispan server communication should be secured. You secure this communication either by enabling authentication and encryption or by isolating the network used for cluster communication. To find out more about using multiple nodes, the different caches and an appropriate stack for your environment, see Configuring distributed caches . 2.7. Configure Red Hat build of Keycloak Server with IPv4 or IPv6 The system properties java.net.preferIPv4Stack and java.net.preferIPv6Addresses are used to configure the JVM for use with IPv4 or IPv6 addresses. By default, Red Hat build of Keycloak is accessible via IPv4 and IPv6 addresses at the same time. In order to run only with IPv4 addresses, you need to specify the property java.net.preferIPv4Stack=true . The latter ensures that any hostname to IP address conversions always return IPv4 address variants. These system properties are conveniently set by the JAVA_OPTS_APPEND environment variable. For example, to change the IP stack preference to IPv4, set an environment variable as follows: export JAVA_OPTS_APPEND="-Djava.net.preferIPv4Stack=true" | [
"export JAVA_OPTS_APPEND=\"-Djava.net.preferIPv4Stack=true\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/configuration-production- |
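As a rough illustration of how several of the settings discussed in this chapter come together on the command line, the following start invocation is a sketch only; the hostname, certificate paths, and queue limit are placeholder values, not recommendations:

bin/kc.sh start \
  --hostname=keycloak.example.com \
  --https-certificate-file=/path/to/tls.crt \
  --https-certificate-key-file=/path/to/tls.key \
  --http-max-queued-requests=1000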
23.2. XML Representation of the MAC Address Pools Collection | 23.2. XML Representation of the MAC Address Pools Collection Example 23.1. An XML representation of the MAC address pools collection | [
"<mac_pools> <mac_pool href=\"/ovirt-engine/api/macpools/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <name>Default</name> <description>Default MAC pool</description> <allow_duplicates>false</allow_duplicates> <default_pool>true</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> </ranges> </mac_pool> </mac_pools>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_the_mac_address_pool_collection |
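To retrieve this collection yourself, send a GET request to the macpools collection shown in the href attribute above. The following curl call is a hedged example; the Manager FQDN and credentials are placeholders for your own values:

curl -s -k -u 'admin@internal:password' -H 'Accept: application/xml' 'https://rhvm.example.com/ovirt-engine/api/macpools'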
C.5. Selection Criteria Display Examples | C.5. Selection Criteria Display Examples This section provides a series of examples showing how to use selection criteria for LVM display commands. The examples in this section use a system configured with LVM volumes that yield the following output when selection criteria are not used. The following command displays all logical volumes with "lvol[13]" in their name, using a regular expression to specify this. The following command displays all logical volumes greater than 500 megabytes in size. The following command displays all logical volumes that include thin as a logical volume role, indicating that the logical volume is used in constructing a thin pool. This example uses braces ({}) to indicate a subset in the display. The following command displays all usable top-level logical volumes, which are the logical volumes with a role of "public". If you do not specify braces ({}) in a string list to indicate a subset, it is assumed by default; specifying lv_role=public is equivalent to specifying lv_role={public}. The following command displays all logical volumes with a thin layout. The following command displays all logical volumes with a layout field that matches "sparse,thin" exactly. Note that it is not necessary to specify the string list members in the same order for the match to be positive. The following command displays the logical volume names of the logical volumes that are thin, sparse logical volumes. Note that the list of fields used for selection criteria does not need to be the same as the list of fields to display. | [
"lvs -a -o+layout,role LV VG Attr LSize Pool Origin Data% Meta% Layout Role root f1 -wi-ao---- 9.01g linear public swap f1 -wi-ao---- 512.00m linear public [lvol0_pmspare] vg ewi------- 4.00m linear private, pool,spare lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public, origin, thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public, snapshot, thinsnapshot pool vg twi-aotz-- 100.00m 0.00 1.07 thin,pool private [pool_tdata] vg Twi-ao---- 100.00m linear private, thin,pool, data [pool_tmeta] vg ewi-ao---- 4.00m linear private, thin,pool, metadata",
"lvs -a -o+layout,role -S 'lv_name=~lvol[13]' LV VG Attr LSize Pool Origin Data% Layout Role lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot,thinsnapshot",
"lvs -a -o+layout,role -S 'lv_size>500m' LV VG Attr LSize Pool Origin Data% Layout Role root f1 -wi-ao---- 9.01g linear public swap f1 -wi-ao---- 512.00m linear public lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin,thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot, thinsnapshot",
"lvs -a -o+layout,role -S 'lv_role={thin}' LV VG Attr LSize Layout Role [pool_tdata] vg Twi-ao---- 100.00m linear private,thin,pool,data [pool_tmeta] vg ewi-ao---- 4.00m linear private,thin,pool,metadata",
"lvs -a -o+layout,role -S 'lv_role=public' LV VG Attr LSize Pool Origin Data% Layout Role root f1 -wi-ao---- 9.01g linear public swap f1 -wi-ao---- 512.00m linear public lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin,thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot,thinsnapshot",
"lvs -a -o+layout,role -S 'lv_layout={thin}' LV VG Attr LSize Pool Origin Data% Meta% Layout Role lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin, thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot, thinsnapshot pool vg twi-aotz-- 100.00m 0.00 1.07 thin,pool private",
"lvs -a -o+layout,role -S 'lv_layout=[sparse,thin]' LV VG Attr LSize Pool Origin Data% Layout Role lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin,thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot,thinsnapshot",
"lvs -a -o lv_name -S 'lv_layout=[sparse,thin]' LV lvol1 lvol2 lvol3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/selection_display_examples |
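Selection criteria can also be combined with logical operators. The following command is an additional sketch, not part of the output above; it joins two of the earlier conditions with && (logical and):

lvs -a -o lv_name,lv_size,lv_role -S 'lv_size>500m && lv_role=public'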
10.3.4. Deprecated Packages | 10.3.4. Deprecated Packages qt3 GFS1 gcj continues to be included in Red Hat Enterprise Linux 6 for performance reasons; however, gcj is not likely to be included in future releases. cryptoloop mingw32-qpid-cpp python-qmf python-qpid qpid-cpp qpid-qmf qpid-tests qpid-tools ruby-qpid saslwrapper | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-packages-deprecated-pkg 
8.243. xorg-x11-drv-wacom | 8.243. xorg-x11-drv-wacom 8.243.1. RHBA-2013:1568 - xorg-x11-drv-wacom bug fix update Updated xorg-x11-drv-wacom packages that fix one bug are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-wacom packages provide an X Window System input device driver that allows the X server to handle Wacom tablets with extended functionality. Bug Fix BZ# 920385 Previously, the xsetwacom command was unable to map the Esc key to one of the buttons on a Wacom Cintiq 22HD tablet. As a consequence, the command failed without displaying an error message, and the key was not mapped. This bug has been fixed, and, as a result, the driver now maps the Esc key as expected. Users of xorg-x11-drv-wacom are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xorg-x11-drv-wacom 
Security and compliance | Security and compliance OpenShift Container Platform 4.11 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/security_and_compliance/index |
Chapter 10. Using config maps with applications | Chapter 10. Using config maps with applications Config maps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The following sections define config maps and how to create and use them. 10.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. Additional resources Creating and using config maps 10.2. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 10.2.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 10.2.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax $(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the $(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. 
When this pod is run, the output from the echo command run in the test-container container is as follows: 10.2.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: | [
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/config-maps |
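As a shortcut to writing the YAML by hand, the special-config map used in these examples could also be created from literal key-value pairs; this is one possible approach, not the only one:

oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
oc get configmap special-config -o yaml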
A.5. Test Your HAProxy Configuration | A.5. Test Your HAProxy Configuration On your HAProxy nodes, check to ensure the virtual IP address from your keepalived configuration appears. On your calamari node, see if you can reach the gateway nodes by means of the load balancer configuration. For example: This should return the same result as: If it returns an index.html file with the following contents then your configuration is working properly. | [
"ip addr show",
"wget haproxy",
"wget rgw1",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/test_haproxy_example1 |
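To confirm that requests are actually being distributed across the gateway nodes rather than always reaching the same one, repeating the request a few times and comparing the responses is usually enough. A simple loop, assuming the load balancer answers on the haproxy hostname:

for i in 1 2 3 4; do curl -s -o /dev/null -w "%{http_code}\n" http://haproxy/; done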
Chapter 2. The Ceph File System Metadata Server | Chapter 2. The Ceph File System Metadata Server As a storage administrator, you can learn about the different states of the Ceph File System (CephFS) Metadata Server (MDS), along with learning about CephFS MDS ranking mechanic, configuring the MDS standby daemon, and cache size limits. Knowing these concepts can enable you to configure the MDS daemons for a storage environment. 2.1. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Metadata Server daemons ( ceph-mds ). 2.2. Metadata Server daemon states The Metadata Server (MDS) daemons operate in two states: Active - manages metadata for files and directories stores on the Ceph File System. Standby - serves as a backup, and becomes active when an active MDS daemon becomes unresponsive. By default, a Ceph File System uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. You can configure the file system to use multiple active MDS daemons so that you can scale metadata performance for larger workloads. The active MDS daemons dynamically share the metadata workload when metadata load patterns change. Note that systems with multiple active MDS daemons still require standby MDS daemons to remain highly available. What Happens When the Active MDS Daemon Fails When the active MDS becomes unresponsive, a Ceph Monitor daemon waits a number of seconds equal to the value specified in the mds_beacon_grace option. If the active MDS is still unresponsive after the specified time period has passed, the Ceph Monitor marks the MDS daemon as laggy . One of the standby daemons becomes active, depending on the configuration. Note To change the value of mds_beacon_grace , add this option to the Ceph configuration file and specify the new value. 2.3. Metadata Server ranks Each Ceph File System (CephFS) has a number of ranks, one by default, which starts at zero. Ranks define the way how the metadata workload is shared between multiple Metadata Server (MDS) daemons. The number of ranks is the maximum number of MDS daemons that can be active at one time. Each MDS daemon handles a subset of the CephFS metadata that is assigned to that rank. Each MDS daemon initially starts without a rank. The Ceph Monitor assigns a rank to the daemon. The MDS daemon can only hold one rank at a time. Daemons only lose ranks when they are stopped. The max_mds setting controls how many ranks will be created. The actual number of ranks in the CephFS is only increased if a spare daemon is available to accept the new rank. Rank States Ranks can be: Up - A rank that is assigned to the MDS daemon. Failed - A rank that is not associated with any MDS daemon. Damaged - A rank that is damaged; its metadata is corrupted or missing. Damaged ranks are not assigned to any MDS daemons until the operator fixes the problem, and uses the ceph mds repaired command on the damaged rank. 2.4. Metadata Server cache size limits You can limit the size of the Ceph File System (CephFS) Metadata Server (MDS) cache by: A memory limit : Use the mds_cache_memory_limit option. Red Hat recommends a value between 8 GB and 64 GB for mds_cache_memory_limit . Setting more cache can cause issues with recovery. This limit is approximately 66% of the desired maximum memory use of the MDS. Important Red Hat recommends to use memory limits instead of inode count limits. Inode count : Use the mds_cache_size option. By default, limiting the MDS cache by inode count is disabled. 
In addition, you can specify a cache reservation by using the mds_cache_reservation option for MDS operations. The cache reservation is limited as a percentage of the memory or inode limit and is set to 5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for its cache for new metadata operations to use. As a consequence, the MDS should in general operate below its memory limit because it will recall old state from clients in order to drop unused metadata in its cache. The mds_cache_reservation option replaces the mds_health_cache_threshold option in all situations, except when MDS nodes sends a health alert to the Ceph Monitors indicating the cache is too large. By default, mds_health_cache_threshold is 150% of the maximum cache size. Be aware that the cache limit is not a hard limit. Potential bugs in the CephFS client or MDS or misbehaving applications might cause the MDS to exceed its cache size. The mds_health_cache_threshold option configures the storage cluster health warning message, so that operators can investigate why the MDS cannot shrink its cache. Additional Resources See the Metadata Server daemon configuration reference section in the Red Hat Ceph Storage File System Guide for more information. 2.5. Configuring multiple active Metadata Server daemons Configure multiple active Metadata Server (MDS) daemons to scale metadata performance for large systems. Important Do not convert all standby MDS daemons to active ones. A Ceph File System (CephFS)requires at least one standby MDS daemon to remain highly available. Important The scrubbing process is not currently supported when multiple active MDS daemons are configured. Prerequisites Ceph administration capabilities on the MDS node. Procedure Set the max_mds parameter to the desired number of active MDS daemons: Syntax Example This example increases the number of active MDS daemons to two in the CephFS called cephfs Note Ceph only increases the actual number of ranks in the CephFS if a spare MDS daemon is available to take the new rank. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide for more details. See the Decreasing the Number of Active MDS Daemons section in the Red Hat Ceph Storage File System Guide for more details. See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details. 2.6. Configuring the number of standby daemons Each Ceph File System (CephFS) can specify the required number of standby daemons to be considered healthy. This number also includes the standby-replay daemon waiting for a rank failure. Prerequisites User access to the Ceph Monitor node. Procedure Set the expected number of standby daemons for a particular CephFS: Syntax Note Setting the NUMBER to zero disables the daemon health check. Example This example sets the expected standby daemon count to two. 2.7. Configuring the standby-replay Metadata Server Configure each Ceph File System (CephFS) by adding a standby-replay Metadata Server (MDS) daemon. Doing this reduces failover time if the active MDS becomes unavailable. This specific standby-replay daemon follows the active MDS's metadata journal. The standby-replay daemon is only used by the active MDS of the same rank, and is not available to other ranks. Important If using standby-replay, then every active MDS must have a standby-replay daemon. 
Prerequisites User access to the Ceph Monitor node. Procedure Set the standby-replay for a particular CephFS: Syntax Example In this example, the Boolean value is 1 , which enables the standby-replay daemons to be assigned to the active Ceph MDS daemons. Note Setting the allow_standby_replay Boolean value back to 0 only prevents new standby-replay daemons from being assigned. To also stop the running daemons, mark them as failed with the ceph mds fail command. Additional Resources See the Using the ceph mds fail command section in the Red Hat Ceph Storage File System Guide for details. 2.8. Decreasing the number of active Metadata Server daemons How to decrease the number of active Ceph File System (CephFS) Metadata Server (MDS) daemons. Prerequisites The rank that you will remove must be active first, meaning that you must have the same number of MDS daemons as specified by the max_mds parameter. Procedure Set the same number of MDS daemons as specified by the max_mds parameter: Syntax Example On a node with administration capabilities, change the max_mds parameter to the desired number of active MDS daemons: Syntax Example Wait for the storage cluster to stabilize to the new max_mds value by watching the Ceph File System status. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide . See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide . 2.9. Additional Resources See the Installing Metadata servers section of the Red Hat Ceph Storage Installation Guide for details. See the Red Hat Ceph Storage Installation Guide for details on installing a Red Hat Ceph Storage cluster. | [
"ceph fs set NAME max_mds NUMBER",
"ceph fs set cephfs max_mds 2",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+-------+---------------+-------+-------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | +------+--------+-------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+",
"ceph fs set FS_NAME standby_count_wanted NUMBER",
"ceph fs set cephfs standby_count_wanted 2",
"ceph fs set FS_NAME allow_standby_replay 1",
"ceph fs set cephfs allow_standby_replay 1",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+-------+---------------+-------+-------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | +------+--------+-------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+",
"ceph fs set NAME max_mds NUMBER",
"ceph fs set cephfs max_mds 1",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+-------+---------------+-------+-------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | +------+--------+-------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/the-ceph-file-system-metadata-server |
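When changing max_mds or the standby settings, it can help to watch the daemon states while the cluster converges. These read-only checks are safe to run at any time; cephfs is the example file system name used above:

ceph mds stat
ceph fs get cephfs | grep -E 'max_mds|standby'
ceph fs status cephfs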
23.5. CPU tuning | 23.5. CPU tuning <domain> ... <cputune> <vcpupin vcpu="0" cpuset="1-4,^2"/> <vcpupin vcpu="1" cpuset="0,1"/> <vcpupin vcpu="2" cpuset="2,3"/> <vcpupin vcpu="3" cpuset="0,4"/> <emulatorpin cpuset="1-3"/> <shares>2048</shares> <period>1000000</period> <quota>-1</quota> <emulator_period>1000000</emulator_period> <emulator_quota>-1</emulator_quota> </cputune> ... </domain> Figure 23.7. CPU Tuning Although all are optional, the components of this section of the domain XML are as follows: Table 23.4. CPU tuning elements Element Description <cputune> Provides details regarding the CPU tunable parameters for the domain. This is optional. <vcpupin> Specifies which of host physical machine's physical CPUs the domain vCPU will be pinned to. If this is omitted, and the cpuset attribute of the <vcpu> element is not specified, the vCPU is pinned to all the physical CPUs by default. It contains two required attributes: the <vcpu> attribute specifies id , and the cpuset attribute is same as the cpuset attribute in the <vcpu> element. <emulatorpin> Specifies which of the host physical machine CPUs the "emulator" (a subset of a domains not including <vcpu> ) will be pinned to. If this is omitted, and the cpuset attribute in the <vcpu> element is not specified, the "emulator" is pinned to all the physical CPUs by default. It contains one required cpuset attribute specifying which physical CPUs to pin to. emulatorpin is not allowed if the placement attribute in the <vcpu> element is set as auto . <shares> Specifies the proportional weighted share for the domain. If this is omitted, it defaults to the operating system provided defaults. If there is no unit for the value, it is calculated relative to the setting of the other guest virtual machine. For example, a guest virtual machine configured with a <shares> value of 2048 will get twice as much CPU time as a guest virtual machine configured with a <shares> value of 1024. <period> Specifies the enforcement interval in microseconds. By using <period> , each of the domain's vCPUs will not be allowed to consume more than its allotted quota worth of run time. This value should be within the following range: 1000-1000000 . A <period> with a value of 0 means no value. <quota> Specifies the maximum allowed bandwidth in microseconds. A domain with <quota> as any negative value indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be within the following range: 1000 - 18446744073709551 or less than 0 . A quota with value of 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed. <emulator_period> Specifies the enforcement interval in microseconds. Within an <emulator_period> , emulator threads (those excluding vCPUs) of the domain will not be allowed to consume more than the <emulator_quota> worth of run time. The <emulator_period> value should be in the following range: 1000 - 1000000 . An <emulator_period> with value of 0 means no value. <emulator_quota> Specifies the maximum allowed bandwidth in microseconds for the domain's emulator threads (those excluding vCPUs). A domain with an <emulator_quota> as a negative value indicates that the domain has infinite bandwidth for emulator threads (those excluding vCPUs), which means that it is not bandwidth controlled. The value should be in the following range: 1000 - 18446744073709551 , or less than 0 . An <emulator_quota> with value 0 means no value. | [
"<domain> <cputune> <vcpupin vcpu=\"0\" cpuset=\"1-4,^2\"/> <vcpupin vcpu=\"1\" cpuset=\"0,1\"/> <vcpupin vcpu=\"2\" cpuset=\"2,3\"/> <vcpupin vcpu=\"3\" cpuset=\"0,4\"/> <emulatorpin cpuset=\"1-3\"/> <shares>2048</shares> <period>1000000</period> <quota>-1</quota> <emulator_period>1000000</emulator_period> <emulator_quota>-1</emulator_quota> </cputune> </domain>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-cpu_tuning |
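The pinning and scheduler values shown in the XML can also be applied to a running guest with virsh, which is convenient for experimenting before editing the domain definition. This is a sketch; guest1 is a placeholder domain name:

virsh vcpupin guest1 0 1-4,^2
virsh emulatorpin guest1 1-3
virsh schedinfo guest1 --set cpu_shares=2048
virsh schedinfo guest1 --set vcpu_quota=-1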
Chapter 12. Eliminating typographical errors in local SSSD configuration | Chapter 12. Eliminating typographical errors in local SSSD configuration You can test whether the /etc/sssd/sssd.conf file on your host contains any typographical errors using the sssctl config-check command. Prerequisites You are logged in as root. The sssd-tools package is installed. Procedure Enter the sssctl config-check command: Open the /etc/sssd/sssd.conf file and correct the typo. If, for example, you received the error message shown in the previous step, replace ldap_search with ldap_search_base : Save the file. Restart SSSD: Verification Enter the sssctl config-check command: The /etc/sssd/sssd.conf file now has no typographical errors. | [
"sssctl config-check Issues identified by validators: 1 [rule/allowed_domain_options]: Attribute 'ldap_search' is not allowed in section 'domain/example1'. Check for typos. Messages generated during configuration merging: 0 Used configuration snippet files: 0",
"[...] [domain/example1] ldap_search_base = dc=example,dc=com [...]",
"systemctl restart sssd",
"sssctl config-check Issues identified by validators: 0 Messages generated during configuration merging: 0 Used configuration snippet files: 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/eliminating-typographical-errors-in-local-sssd-configuration_configuring-authentication-and-authorization-in-rhel |
Chapter 1. Schedule and quota APIs | Chapter 1. Schedule and quota APIs 1.1. AppliedClusterResourceQuota [quota.openshift.io/v1] Description AppliedClusterResourceQuota mirrors ClusterResourceQuota at a project scope, for projection into a project. It allows a project-admin to know which ClusterResourceQuotas are applied to his project and their associated usage. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. FlowSchema [flowcontrol.apiserver.k8s.io/v1beta1] Description Type `` 1.4. LimitRange [v1] Description LimitRange sets resource usage limits for each kind of resource in a Namespace. Type object 1.5. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object 1.6. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1] Description Type `` 1.7. ResourceQuota [v1] Description ResourceQuota sets aggregate quota restrictions enforced per namespace Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/schedule_and_quota_apis/schedule-and-quota-apis |
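A quick way to see some of these objects in practice is to create a namespaced ResourceQuota and inspect it; the names below are examples only, not part of any default configuration:

oc create quota example-quota --hard=pods=10,requests.cpu=4,requests.memory=8Gi -n my-namespace
oc describe quota example-quota -n my-namespace
oc get appliedclusterresourcequota -n my-namespace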
18.8. Managing a Virtual Network | 18.8. Managing a Virtual Network To configure a virtual network on your system: From the Edit menu, select Connection Details . Figure 18.10. Selecting a host physical machine's details This will open the Connection Details menu. Click the Virtual Networks tab. Figure 18.11. Virtual network configuration All available virtual networks are listed on the left-hand box of the menu. You can edit the configuration of a virtual network by selecting it from this box and editing as you see fit. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-virtual_networking-managing_a_virtual_network |
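If you prefer to cross-check the result from a shell instead of the Virtual Machine Manager window, the same information is available through virsh; default is the standard libvirt network name and may differ on your host:

virsh net-list --all
virsh net-info default
virsh net-dumpxml default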
Chapter 56. Scheduler | Chapter 56. Scheduler Only consumer is supported The Scheduler component is used to generate message exchanges when a scheduler fires. This component is similar to the Timer component, but it offers more functionality in terms of scheduling. Also this component uses JDK ScheduledExecutorService . Where as the timer uses a JDK Timer . You can only consume events from this endpoint. 56.1. URI format Where name is the name of the scheduler, which is created and shared across endpoints. So if you use the same name for all your scheduler endpoints, only one scheduler thread pool and thread will be used - but you can configure the thread pool to allow more concurrent threads. Note The IN body of the generated exchange is null . So exchange.getIn().getBody() returns null . 56.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 56.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 56.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 56.3. Component Options The Scheduler component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean poolSize (scheduler) Number of core threads in the thread pool used by the scheduling thread pool. 
Is by default using a single thread. 1 int 56.4. Endpoint Options The Scheduler endpoint is configured using URI syntax: with the following path and query parameters: 56.4.1. Path Parameters (1 parameters) Name Description Default Type name (consumer) Required The name of the scheduler. String 56.4.2. Query Parameters (21 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long poolSize (scheduler) Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 int repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. 
By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 56.5. More information This component is a scheduler Polling Consumer where you can find more information about the options above, and examples at the Polling Consumer page. 56.6. Exchange Properties When the timer is fired, it adds the following information as properties to the Exchange : Name Type Description Exchange.TIMER_NAME String The value of the name option. Exchange.TIMER_FIRED_TIME Date The time when the consumer fired. 56.7. Sample To set up a route that generates an event every 60 seconds: from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName"); The above route will generate an event and then invoke the someMethodName method on the bean called myBean in the Registry such as JNDI or Spring. And the route in Spring DSL: <route> <from uri="scheduler://foo?delay=60000"/> <to uri="bean:myBean?method=someMethodName"/> </route> 56.8. Forcing the scheduler to trigger immediately when completed To let the scheduler trigger as soon as the task is complete, you can set the option greedy=true . But beware then the scheduler will keep firing all the time. So use this with caution. 56.9. Forcing the scheduler to be idle There can be use cases where you want the scheduler to trigger and be greedy. But sometimes you want "tell the scheduler" that there was no task to poll, so the scheduler can change into idle mode using the backoff options. To do this you would need to set a property on the exchange with the key Exchange.SCHEDULER_POLLED_MESSAGES to a boolean value of false. This will cause the consumer to indicate that there was no messages polled. The consumer will otherwise as by default return 1 message polled to the scheduler, every time the consumer has completed processing the exchange. 56.10. Spring Boot Auto-Configuration When using scheduler with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency> The component supports 4 options, which are listed below. Name Description Default Type camel.component.scheduler.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.scheduler.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.scheduler.enabled Whether to enable auto configuration of the scheduler component. This is enabled by default. Boolean camel.component.scheduler.pool-size Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 Integer | [
"scheduler:name[?options]",
"scheduler:name",
"from(\"scheduler://foo?delay=60000\").to(\"bean:myBean?method=someMethodName\");",
"<route> <from uri=\"scheduler://foo?delay=60000\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-scheduler-component-starter |
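The idle-signalling behaviour described in section 56.9 above is easiest to see in code. The following is a minimal Java sketch, not taken from the Camel documentation, of a RouteBuilder that combines the backoff options with the Exchange.SCHEDULER_POLLED_MESSAGES property; the endpoint name workPoller and the lookupPendingWork() helper are assumptions for illustration, while the bean:myBean?method=someMethodName target reuses the sample shown in this chapter.

```java
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

/**
 * Minimal sketch: a scheduler route that polls a hypothetical source of work
 * and tells the scheduler it was idle when nothing was found, so that the
 * backoff options can kick in. lookupPendingWork() is a placeholder helper.
 */
public class IdleAwareSchedulerRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Poll every 5 seconds; after 3 idle polls in a row, skip the next
        // 5 scheduled polls (backoffMultiplier) before trying again.
        from("scheduler://workPoller?delay=5000&backoffIdleThreshold=3&backoffMultiplier=5")
            .process(exchange -> {
                Object work = lookupPendingWork(); // hypothetical application check
                if (work == null) {
                    // Signal the scheduler that this run polled no messages,
                    // so it can change into idle/backoff mode.
                    exchange.setProperty(Exchange.SCHEDULER_POLLED_MESSAGES, false);
                } else {
                    exchange.getIn().setBody(work);
                }
            })
            .choice()
                .when(body().isNotNull())
                    .to("bean:myBean?method=someMethodName")
            .end();
    }

    private Object lookupPendingWork() {
        // Placeholder for an application-specific check (database, directory, ...).
        return null;
    }
}
```

With backoffIdleThreshold=3 and backoffMultiplier=5, three idle runs in a row cause the consumer to skip the next five scheduled polls before checking for work again.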
Part IV. Configuring Web Service Endpoints | Part IV. Configuring Web Service Endpoints This guide describes how to create Apache CXF endpoints in Red Hat Fuse. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFDeployGuide |
Getting started with .NET on RHEL 9 | Getting started with .NET on RHEL 9 .NET 8.0 Installing and running .NET 8.0 on RHEL 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_9/index |
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster | Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster 3.1. Scaling up storage capacity of AWS OpenShift Data Foundation cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. You can scale up storage capacity of an AWS Red Hat OpenShift Data Foundation cluster in two ways: Scaling up storage capacity on an AWS cluster by adding a new set of OSDs . Scaling up storage capacity on an AWS cluster by resizing existing OSDs . 3.1.1. Scaling up storage capacity on a cluster by adding a new set of OSDs To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. 
For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.1.2. Scaling up storage capacity on an AWS cluster by resizing existing OSDs To increase the storage capacity on a cluster, you can add storage capacity by resizing existing OSDs. Important Resizing existing OSDs can only be done once every 6 hours due to AWS limitations. If you exceed this amount within the 6 hour time frame, you will receive a warning that includes the note You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Update the dataPVCTemplate size for the storageDeviceSets with the new desired size using the oc patch command. In this example YAML, the storage parameter under storageDeviceSets reflects the current size of 512Gi . Using the oc patch command: Get the current OSD storage for the storageDeviceSets you are increasing storage for: Increase the storage with the desired value (the following example reflect the size change of 2Ti): Wait for the OSDs to restart. Confirm that the resize took effect: Verify that for all the resized OSDs, resize is completed and reflected correctly in the CAPACITY column of the command output. If the resize did not take effect, restart the OSD pods again. It may take multiple restarts for the resize to complete. 3.2. Scaling out storage capacity on a AWS cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . 
Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . 3.2.1.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"storageDeviceSets: - name: example-deviceset count: 3 resources: {} placement: {} dataPVCTemplate: spec: storageClassName: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 512Gi",
"get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath=' {.spec.storageDeviceSets[0].dataPVCTemplate.spec.resources.requests.storage} ' 512Gi",
"patch storagecluster ocs-storagecluster -n openshift-storage --type merge --patch \"USD(oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath=' {.spec.storageDeviceSets[0]} ' | jq '.dataPVCTemplate.spec.resources.requests.storage=\"2Ti\"' | jq -c '{spec: {storageDeviceSets: [.]}}')\" storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get pvc -l ceph.rook.io/DeviceSet -n openshift-storage",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/scaling_storage/scaling_storage_capacity_of_aws_openshift_data_foundation_cluster |
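To complement the verification step for resized OSDs, the following is a minimal Java sketch that wraps the same oc get pvc -l ceph.rook.io/DeviceSet -n openshift-storage command shown above and reports which OSD PVCs already show the expected capacity. The expected size of 2Ti, the --no-headers flag, and the use of ProcessBuilder are assumptions for illustration; the oc client must already be logged in to the cluster.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

/**
 * Minimal sketch: run the documented verification command and flag any OSD
 * PVC whose CAPACITY column does not yet show the expected size.
 */
public class OsdResizeCheck {

    public static void main(String[] args) throws Exception {
        String expected = args.length > 0 ? args[0] : "2Ti"; // assumed target size

        Process proc = new ProcessBuilder(
                "oc", "get", "pvc",
                "-l", "ceph.rook.io/DeviceSet",
                "-n", "openshift-storage",
                "--no-headers")
                .redirectErrorStream(true)
                .start();

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(proc.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Typical columns: NAME STATUS VOLUME CAPACITY ACCESS-MODES ...
                String[] cols = line.trim().split("\\s+");
                if (cols.length < 4) {
                    continue; // skip rows without a capacity value
                }
                String name = cols[0];
                String capacity = cols[3];
                String status = capacity.equals(expected) ? "resized" : "still " + capacity;
                System.out.printf("%s: %s%n", name, status);
            }
        }
        proc.waitFor();
    }
}
```

If some PVCs still report the old size, restart the corresponding OSD pods again as described above; multiple restarts may be needed before the resize completes.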
Installing on IBM Power | Installing on IBM Power OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power/index |
4.3. Additional Resources | 4.3. Additional Resources For details about Samba, see the corresponding section in the Red Hat System Administrator's Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/samba_additional_resources |
Chapter 7. Choosing and obtaining a Red Hat Ansible Automation Platform installer | Chapter 7. Choosing and obtaining a Red Hat Ansible Automation Platform installer Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the scenarios below and determine which Red Hat Ansible Automation Platform installer meets your needs. 7.1. Installing with internet access Choose the Red Hat Ansible Automation Platform installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your Ansible Automation Platform installer. Tarball install Navigate to the Red Hat Ansible Automation Platform download page . Click Download Now for the Ansible Automation Platform <latest-version> Setup . Extract the files: USD tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz RPM install Install the Ansible Automation Platform Installer Package v.2.3 for RHEL 8 for x86_64 USD sudo dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-automation-platform-installer v.2.3 for RHEL 9 for x86-64 USD sudo dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-9-x86_64-rpms ansible-automation-platform-installer Note dnf install enables the repo as the repo is disabled by default. When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory. 7.2. Installing without internet access Use the Red Hat Ansible Automation Platform Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive. Procedure Navigate to https://access.redhat.com/downloads/content/480 Click Download Now for the Ansible Automation Platform <latest-version> Setup Bundle . Extract the files: USD tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz | [
"tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz",
"sudo dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-automation-platform-installer",
"sudo dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-9-x86_64-rpms ansible-automation-platform-installer",
"tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_planning_guide/choosing_and_obtaining_a_red_hat_ansible_automation_platform_installer |
Chapter 1. Understanding networking | Chapter 1. Understanding networking Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections: Service types, such as node ports or load balancers API resources, such as Ingress and Route By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. Note Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 169.254.0.0/16 CIDR block. This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the spec.hostNetwork field in the pod spec to true . If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure. 1.1. OpenShift Container Platform DNS If you are running multiple services, such as front-end and back-end services for use with multiple pods, environment variables are created for user names, service IPs, and more so the front-end pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, and requires the front-end pods to be recreated to pick up the updated values for the service IP environment variable. Additionally, the back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly, and that it can be provided to the front-end pods as an environment variable. For this reason, OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. 1.2. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 1.2.1. Comparing routes and Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on HAProxy , which is an open source load balancer solution. 
The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In OpenShift Container Platform, routes are generated to meet the conditions specified by the Ingress resource. 1.3. Glossary of common terms for OpenShift Container Platform networking This glossary defines common terms that are used in the networking content. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. AWS Load Balancer Operator The AWS Load Balancer (ALB) Operator deploys and manages an instance of the aws-load-balancer-controller . Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) default network provider plug-in selected for the cluster during installation. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. custom resource (CR) A CR is extension of the Kubernetes API. You can create custom resources. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. deployment A Kubernetes resource object that maintains the life cycle of an application. domain Domain is a DNS name serviced by the Ingress Controller. egress The process of data sharing externally through a network's outbound traffic from a pod. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. HTTP-based route An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. Ingress Controller The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. 
installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. kube-proxy Kube-proxy is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding the request to correct containers and is capable of performing primitive load balancing. load balancers OpenShift Container Platform uses load balancers for communicating from outside the cluster with services running in the cluster. MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. multicast With IP multicast, data is broadcast to many IP addresses simultaneously. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of a OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift Container Platform Ingress Operator The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform services. pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. PTP Operator The PTP Operator creates and manages the linuxptp services. route The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. scaling Increasing or decreasing the resource capacity. service Exposes a running application on a set of pods. Single Root I/O Virtualization (SR-IOV) Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. software-defined networking (SDN) OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. Stream Control Transmission Protocol (SCTP) SCTP is a reliable message based protocol that runs on top of an IP network. taint Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. web console A user interface (UI) to manage OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/understanding-networking |
Chapter 5. Managing integrations in the Hybrid Cloud Console | Chapter 5. Managing integrations in the Hybrid Cloud Console From the Integrations page, you can edit integrations, pause or resume data collection, or remove existing integrations from the Hybrid Cloud Console. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator or as a user with Cloud Administrator permissions. You have a cloud or Red Hat integration in the Hybrid Cloud Console. Procedure Go to Settings > Integrations . Find the integration you want to edit under the Cloud or Red Hat tab. Click the options icon (...) to the integration and select one of the following tasks: Pause : Temporarily disable data collection for connected Hybrid Cloud Console services. If the integration is currently paused, click Resume to continue collecting data for connected services. Remove : Permanently delete the integration and all collected data from the Hybrid Cloud Console. Edit : View integration details and make changes to the integration configuration in the Hybrid Cloud Console. You can rename the integration, check its availability, connect additional services, or edit the configuration or remove connected services. Verification Go to the Integrations page and confirm that your desired changes have been made. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/managing-integrations_crc-cloud-integrations |
5.2. Installing in Domain Mode | 5.2. Installing in Domain Mode The Domain mode is for multiple server instances. It requires load balancing capable for handling TCP for JDBC connections. (Apache will not work in this case.) When you have more than one Red Hat JBoss EAP instance in your server farm and you start them all in domain mode, all of the configuration options for this server farm can be centrally managed. For example, you can deploy an artifact or create a data source across all the instances, with one single CLI-based call. Red Hat JBoss Data Virtualization extends this configuration concept to allow you to deploy your VDBs and translators across the whole server farm. When domain mode is combined with the HA (high availability) profile, you can cluster the Red Hat JBoss Data Virtualization server instances that are deployed. (The HA profile is set as the default in the domain.xml file.) When you start the server using the domain.xml file, the distributed caching that is used for ResultSet caching and Internal Materialized caching is automatically configured. You use the Admin API the same way in both the standalone mode and the domain mode. When multiple Red Hat JBoss Data Virtualization instances are available in a cluster, you can make use load balancing and fail-over features. You must create a server group, then create a server and then deploy all the available resources to the server group you have created. The following instructions teach you how to install a systemd service on Linux. For other use cases, please refer to https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html-single/administration_and_configuration_guide/index . To start the server in "Domain" mode, install Red Hat JBoss Data Virtualization on all the servers that are going to be part of the cluster. Select one of the servers as the "master" domain controller. (The rest of the servers will be slaves that connect to the "master" domain controller for all the administrative operations.) On the domain controller, copy the domain-controller/init.d directory to JBOSS_HOME/bin . On the host controller, copy the host-controller/init.d directory to JBOSS_HOME/bin . Prepare the directory structure: Configure the domain: Configure the host controller: Add this Red Hat JBoss Data Virtualization script to systemd: Configure the firewall ports on both hosts: If you are using port offsets, adjust the values by adding the offset. Once you have configured all the servers, start the "master" node: Start the "slave" nodes: The slave nodes fetch their domain configuration settings from the "master" node. Once VDBs have been deployed, users can connect their JDBC applications to Red Hat JBoss Data Virtualization. You can enable Red Hat JBoss Data Virtualization as a service with the operating system, using this command: You can now start the service like this: To stop it, run this command: To restart, run this command: Note Teiid Designer cannot connect to the Red Hat JBoss Data Virtualization Server in "domain" mode. Red Hat recommends using other types of deployment strategies (such as CLI or web-console) for deploying and testing, as it is expected you will be using domain mode in production environments. Teiid Designer is to aid development-time activities only and should only be used in testing environments. | [
"adduser jboss mkdir /opt/jdv mkdir /etc/jboss-as mkdir /var/log/jboss mkdir /var/run/jboss chown -R jboss: /opt/jdv chown -R jboss: /etc/jboss-as chown -R jboss: /var/log/jboss chown -R jboss: /var/run/jboss",
"cat > /etc/jboss-as/jboss-as.conf <<EOF JBOSS_HOME=/opt/jdv/jboss-data-virtualization JBOSS_PIDFILE=/var/run/jboss/jboss-as-domain.pid JBOSS_HOST_CONFIG=host.xml JBOSS_ADDRESS_MANAGEMENT=10.255.0.82 JBOSS_USER=jboss JBOSS_CONSOLE_LOG=/var/log/jboss/console.log EOF",
"cat > /etc/jboss-as/jboss-as.conf <<EOF JBOSS_HOME=/opt/jdv/jboss-data-virtualization JBOSS_PIDFILE=/var/run/jboss/jboss-as-domain.pid JBOSS_HOST_CONFIG=host-slave.xml JBOSS_MASTER_ADDRESS=10.255.0.82 JBOSS_PUBLIC_ADDRESS=10.255.0.83 JBOSS_USER=jboss JBOSS_CONSOLE_LOG=/var/log/jboss/console.log EOF",
"JBOSS_HOME=/opt/jdv/jboss-data-virtualization cat > /tmp/teste <<EOF [Unit] Description=JBoss Data Virtualization Systemctl script After=NetworkManager.service [Service] Type=forking ExecStart=USDJBOSS_HOME/bin/init.d/jboss-as-domain.sh start ExecStop=USDJBOSS_HOME/bin/init.d/jboss-as-domain.sh stop ExecReload=USDJBOSS_HOME/bin/init.d/jboss-as-domain.sh restart PIDFile=/var/run/jboss/jboss-as-domain.pid [Install] WantedBy=multi-user.target EOF",
"cat > /etc/firewalld/services/jboss-data-virtualization.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <service version=\"1.0\"> <short>jboss-data-virtualization</short> <port port=\"31000\" protocol=\"tcp\"/> <port port=\"8080\" protocol=\"tcp\"/> <port port=\"8443\" protocol=\"udp\"/> <port port=\"8009\" protocol=\"tcp\"/> <port port=\"4447\" protocol=\"tcp\"/> <port port=\"9990\" protocol=\"udp\"/> <port port=\"9999\" protocol=\"tcp\"/> </service> EOF",
"firewall-cmd --zone=public --add-service=jboss-data-virtualization firewall-cmd --permanent --zone=public --add-service=jboss-data-virtualization firewall-cmd --zone=public --list-services firewall-cmd --permanent --zone=public --list-services",
"/bin/domain.sh",
"/bin/domain.sh",
"systemctl enable jboss-data-virtualization",
"systemctl start jdv",
"systemctl stop jdv",
"systemctl restart jdv"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/ch05s02 |
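Once the domain is running and a VDB is deployed, client applications connect over JDBC as mentioned above. The following is a minimal, hedged Java sketch of such a client; the VDB name MyVDB, the credentials, and the assumption that the standard Teiid JDBC driver JAR is on the classpath are placeholders, while the host addresses and port 31000 reuse the values from the host controller and firewall configuration in this section. Listing both hosts in the URL lets the driver fail over to the other server if one becomes unavailable.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Minimal sketch of a JDBC client for a clustered deployment. Adjust the VDB
 * name, hosts, port, and credentials for your environment; the Teiid JDBC
 * driver must be on the classpath so DriverManager can locate it.
 */
public class ClusteredVdbClient {

    public static void main(String[] args) throws Exception {
        // Two servers listed for load balancing / fail-over.
        String url = "jdbc:teiid:MyVDB@mm://10.255.0.82:31000,10.255.0.83:31000";

        try (Connection conn = DriverManager.getConnection(url, "teiidUser", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected, test query returned: " + rs.getInt(1));
            }
        }
    }
}
```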
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_streams_on_openshift_overview/making-open-source-more-inclusive |
C.16. RecoveryAdmin | C.16. RecoveryAdmin org.infinispan.transaction.xa.recovery.RecoveryAdminOperations The RecoveryAdmin component exposes tooling for handling transaction recovery. Table C.26. Operations Name Description Signature forceCommit Forces the commit of an in-doubt transaction. String forceCommit(long p0) forceCommit Forces the commit of an in-doubt transaction String forceCommit(int p0, byte[] p1, byte[] p2) forceRollback Forces the rollback of an in-doubt transaction. String forceRollback(long p0) forceRollback Forces the rollback of an in-doubt transaction String forceRollback(int p0, byte[] p1, byte[] p2) forget Removes recovery info for the given transaction. String forget(long p0) forget Removes recovery info for the given transaction. String forget(int p0, byte[] p1, byte[] p2) showInDoubtTransactions Shows all the prepared transactions for which the originating node crashed. String showInDoubtTransactions() | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/recoveryadmin
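The operations in Table C.26 are exposed over JMX, so they can be invoked from a management script as well as from a console. The following is a rough Java sketch of such a client; the JMX service URL and, in particular, the RecoveryAdmin ObjectName pattern are placeholders that you should replace with the values visible in your own JMX console (for example JConsole), and the remoting-jmx protocol requires the JBoss client libraries on the classpath.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/**
 * Minimal sketch: list in-doubt transactions through the RecoveryAdmin MBean.
 * The service URL and ObjectName below are placeholders for illustration.
 */
public class RecoveryAdminClient {

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:remoting-jmx://localhost:9999"); // placeholder endpoint

        try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();

            // Hypothetical pattern -- copy the real ObjectName from your JMX console.
            ObjectName pattern = new ObjectName(
                    "org.infinispan:type=Cache,component=RecoveryAdmin,*");

            // Query the matching MBeans and invoke the no-argument operation.
            for (ObjectName name : server.queryNames(pattern, null)) {
                String inDoubt = (String) server.invoke(
                        name, "showInDoubtTransactions", new Object[0], new String[0]);
                System.out.println(name + " -> " + inDoubt);
            }
        }
    }
}
```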
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/providing-feedback-on-red-hat-documentation_osp |
Chapter 9. Gathering the observability data from multiple clusters | Chapter 9. Gathering the observability data from multiple clusters For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. The following mounted certificates: Issuer, self-signed certificate, CA issuer, client and server certificates. To create any of these certificates, see step 1. Procedure Mount the following certificates in the OpenTelemetry Collector instance, skipping already mounted certificates. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {} A self-signed certificate. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io A CA issuer. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret The client and server certificates. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 2 issuerRef: name: ca-issuer 1 List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance. 2 List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance. Create a service account for the OpenTelemetry Collector instance. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespace resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters. 
Example OpenTelemetryCollector custom resource for the edge clusters apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster. Example OpenTelemetryCollector custom resource for the central cluster apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: "deployment" ingress: type: route route: termination: "passthrough" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: "tempo-<simplest>-distributor:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector receiver requires the certificates listed in the first step. 2 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and already created. | [
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/red_hat_build_of_opentelemetry/otel-gathering-observability-data-from-multiple-clusters |
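After both OpenTelemetryCollector instances are in place, it can help to push a single test span through the pipeline to confirm that traces reach Tempo. The following is a minimal Java sketch using the OpenTelemetry Java SDK; the endpoint (based on the otel.observability.svc.cluster.local DNS name from the certificates above and the default OTLP/HTTP port 4318), the certificate file paths, and the setClientTls/setTrustedCertificates calls on the OTLP exporter builder are assumptions to adapt to your environment.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

/**
 * Minimal sketch: emit one test span over OTLP/HTTP so you can confirm that
 * traces flow from an application, through the collectors, into Tempo.
 * Requires the opentelemetry-sdk and OTLP exporter artifacts on the classpath.
 */
public class OtlpSmokeTest {

    public static void main(String[] args) throws Exception {
        OtlpHttpSpanExporter exporter = OtlpHttpSpanExporter.builder()
                // Placeholder endpoint; adjust host, port, and path for your collector.
                .setEndpoint("https://otel.observability.svc.cluster.local:4318/v1/traces")
                // Trust the CA and present the client certificate created above.
                .setTrustedCertificates(Files.readAllBytes(Paths.get("/certs/ca.crt")))
                .setClientTls(
                        Files.readAllBytes(Paths.get("/certs/tls.key")),
                        Files.readAllBytes(Paths.get("/certs/tls.crt")))
                .build();

        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .build();
        OpenTelemetrySdk sdk = OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .build();

        Tracer tracer = sdk.getTracer("otlp-smoke-test");
        Span span = tracer.spanBuilder("pipeline-check").startSpan();
        span.end();

        // Flush and shut down so the span is exported before the JVM exits.
        tracerProvider.shutdown().join(10, TimeUnit.SECONDS);
    }
}
```

If the span does not appear in Tempo, check the collector logs on both clusters for TLS or export errors before changing the application side.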
6.7. Enabling TLS Client Authentication for the Internal LDAP Server | 6.7. Enabling TLS Client Authentication for the Internal LDAP Server Red Hat Certificate System can communicate with its internal LDAP server via TLS mutual authentication. Once you complete the installation. see Section 7.10.3, "Enabling TLS Client Authentication" in the Post-Installation section for further details on how to enable it. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/enabling_tls_auth_for_internal_ldap |
probe::sunrpc.svc.destroy | probe::sunrpc.svc.destroy Name probe::sunrpc.svc.destroy - Destroy an RPC service Synopsis sunrpc.svc.destroy Values sv_nrthreads the number of concurrent threads sv_name the service name sv_prog the number of the program rpcbadauth the count of requests dropped for authentication failure rpcbadfmt the count of requests dropped for bad formats rpccnt the count of valid RPC requests sv_progname the name of the program netcnt the count of received RPC requests nettcpconn the count of accepted TCP connections | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-svc-destroy
Chapter 13. Updating a cluster that includes the Special Resource Operator | Chapter 13. Updating a cluster that includes the Special Resource Operator When updating a cluster that includes the Special Resource Operator (SRO), it is important to consider whether the new kernel module version is compatible with the kernel modules currently loaded by the SRO. You can run a preflight check to confirm if the SRO will be able to upgrade the kernel modules. 13.1. Custom resource and verification status information The preflight check provides information on the status of the custom resource (CR) and the status of the verification. Possible CR statuses The possible CR statuses are as follows: True The Special Resource Operator (SRO) CR will upgrade without kernel compatibility issues. False There is an issue with kernel compatibility for the CR. The Status Reason field provides additional information on this. Error The status check was not completed due to an internal error. The Status Reason field provides additional information on debugging this. Unknown The check has not yet verified the status of the CR. This might be because the process has not yet reached the specific CR or not enough time has elapsed for the check to complete. Possible verification statuses The possible verification statuses are as follows: True The image exists and is compatible, or the image does not exist but there is a BuildConfig resource in place. False The image does not exist and there is no BuildConfig resource in place, or an image exists but it is not compatible with the new kernel version. If the verification status is False , you can take one of the following steps: Create a prebuilt image with the correct name and check the status field again. Change the CR to include a BuildConfig reosurce. 13.2. Running a preflight check for the Special Resource Operator You can use the following example procedure to check the compatibility of a kernel module version before updating a cluster that includes the Special Resource Operator (SRO). Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged in to the OpenShift CLI as a user with cluster-admin privileges. You installed the SRO. Procedure Create the following preflight validation custom resource definition (CRD) and save the YAML as prevalidation.yaml . apiVersion: sro.openshift.io/v1beta1 kind: PreflightValidation metadata: name: preflight namespace: preflight spec: updateImage: quay.io/openshift-release-dev/ocp-release@sha256:f7f252c39b64601c8ac3de737a584ba4f6016b1f4b17801d726ca2fd15492878 1 1 Specify the name of the update image here. Start the validation check by running the following command: USD oc apply -f prevalidation.yaml Verification Check the status of the custom resource (CR) by running the following command: USD oc describe preflightvalidations.sro.openshift.io/v1beta1 preflight Example output The following is an example output in which simple-oot is a SpecialResource CR that is deployed on the cluster. Status: Cr Statuses: Last Transition Time: 2022-08-02T08:48:45Z Name: simple-oot Status Reason: Verification successful, all driver-containers for the kernel version are present Verification Stage: Image Verification Status: True Events: <none> The preflight check continues to run until all CRs are verified. You can repeat the preceding command to check the status. After all CRs are verified, you should delete the preflight CR. 13.3. Additional resources Special Resource Operator | [
"apiVersion: sro.openshift.io/v1beta1 kind: PreflightValidation metadata: name: preflight namespace: preflight spec: updateImage: quay.io/openshift-release-dev/ocp-release@sha256:f7f252c39b64601c8ac3de737a584ba4f6016b1f4b17801d726ca2fd15492878 1",
"oc apply -f prevalidation.yaml",
"oc describe preflightvalidations.sro.openshift.io/v1beta1 preflight",
"Status: Cr Statuses: Last Transition Time: 2022-08-02T08:48:45Z Name: simple-oot Status Reason: Verification successful, all driver-containers for the next kernel version are present Verification Stage: Image Verification Status: True Events: <none>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/updating_clusters/updating-clusters-sro |
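Because the preflight check keeps running until every CR is verified, it can be convenient to poll the status instead of re-running the command by hand. The following is a small Java sketch that wraps the same oc describe command used in the verification step and counts the Verification Status lines in its output; the 30 second polling interval and the choice to parse the human-readable output rather than JSON are assumptions for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

/**
 * Minimal sketch: repeat the documented status check until no CR reports an
 * unverified status, then exit so the preflight CR can be deleted.
 */
public class PreflightStatusPoller {

    public static void main(String[] args) throws Exception {
        while (true) {
            Process proc = new ProcessBuilder(
                    "oc", "describe",
                    "preflightvalidations.sro.openshift.io/v1beta1", "preflight")
                    .redirectErrorStream(true)
                    .start();

            int pending = 0;
            int verified = 0;
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(proc.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (line.trim().startsWith("Verification Status:")) {
                        if (line.contains("True")) {
                            verified++;
                        } else {
                            pending++;
                        }
                    }
                }
            }
            proc.waitFor();

            System.out.printf("verified=%d, not yet verified=%d%n", verified, pending);
            if (pending == 0 && verified > 0) {
                break; // all CRs verified; remember to delete the preflight CR
            }
            Thread.sleep(30_000); // assumed polling interval
        }
    }
}
```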
7.2. Impact Printers | 7.2. Impact Printers Impact printers are the oldest printing technologies still in active production. Some of the largest printer vendors continue to manufacture, market, and support impact printers, parts, and supplies. Impact printers are most functional in specialized environments where low-cost printing is essential. The three most common forms of impact printers are dot-matrix , daisy-wheel , and line printers . 7.2.1. Dot-Matrix Printers The technology behind dot-matrix printing is quite simple. The paper is pressed against a drum (a rubber-coated cylinder) and is intermittently pulled forward as printing progresses. The electromagnetically-driven printhead moves across the paper and strikes the printer ribbon situated between the paper and printhead pin. The impact of the printhead against the printer ribbon imprints ink dots on the paper which form human-readable characters. Dot-matrix printers vary in print resolution and overall quality with either 9 or 24-pin printheads. The more pins per inch, the higher the print resolution. Most dot-matrix printers have a maximum resolution of around 240 dpi (dots per inch). While this resolution is not as high as those possible in laser or inkjet printers, there is one distinct advantage to dot-matrix (or any form of impact) printing. Because the printhead must strike the surface of the paper with enough force to transfer ink from a ribbon onto the page, it is ideal for environments that must produce carbon copies through the use of special multi-part documents. These documents have carbon (or other pressure-sensitive material) on the underside and create a mark on the sheet underneath when pressure is applied. Retailers and small businesses often use carbon copies as receipts or bills of sale. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-printers-types-impact |
Chapter 10. Creating other kinds of load balancers | Chapter 10. Creating other kinds of load balancers You use the Load-balancing service (octavia) to create the type of load balancer that matches the type of non-HTTP network traffic that you want to manage. Section 10.1, "Creating a TCP load balancer" Section 10.2, "Creating a UDP load balancer with a health monitor" Section 10.3, "Creating a QoS-ruled load balancer" Section 10.4, "Creating a load balancer with an access control list" Section 10.5, "Creating an OVN load balancer" 10.1. Creating a TCP load balancer You can create a load balancer when you need to manage network traffic for non-HTTP, TCP-based services and applications. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites A shared external (public) subnet that you can reach from the internet. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a TCP listener ( listener1 ) on the specified port ( 23456 ) for which the custom application is configured. Example Create a pool ( pool1 ) and make it the default pool for the listener. Example In this example, a pool is created that uses a private subnet containing back-end servers that host a custom application on a specific TCP port: Create a health monitor ( healthmon1 ) on the pool ( pool1 ) that connects to the back-end servers and probes the TCP service port. Example Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Add the back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Use the following command to obtain a member ID: Example A working member ( member1 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command line interface reference 10.2. Creating a UDP load balancer with a health monitor You can create a load balancer when you need to manage network traffic on UDP ports. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites A shared external (public) subnet that you can reach from the internet. No security rules that block ICMP Destination Unreachable messages (ICMP type 3). Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a private subnet ( private_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a listener ( listener1 ) on a port ( 1234 ). Example Create the listener default pool ( pool1 ). 
Example The command in this example creates a pool that uses a private subnet containing back-end servers that host one or more applications configured to use UDP ports: Create a health monitor ( healthmon1 ) on the pool ( pool1 ) that connects to the back-end servers by using UDP ( UDP-CONNECT ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add the back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Example A working member ( member1 ) has an ONLINE value for its operating_status . Sample output Additional resources loadbalancer in the Command line interface reference 10.3. Creating a QoS-ruled load balancer You can apply a Red Hat OpenStack Platform (RHOSP) Networking service (neutron) Quality of Service (QoS) policy to virtual IP addresses (VIPs) that use load balancers. In this way, you can use a QoS policy to limit incoming or outgoing network traffic that the load balancer can manage. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites A shared external (public) subnet that you can reach from the internet. A QoS policy that contains bandwidth limit rules created for the RHOSP Networking service. Procedure Source your credentials file. Example Create a network bandwidth QoS policy ( qos_policy_bandwidth ) with a maximum 1024 kbps and a maximum burst rate of 1024 kb. Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ) by using a QoS policy ( qos-policy-bandwidth ). Example Create a listener ( listener1 ) on a port ( 80 ). Example Create the listener default pool ( pool1 ). Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host an HTTP application on TCP port 80: Create a health monitor ( healthmon1 ) on the pool that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the listener ( listener1 ) settings. Example Sample output In this example the parameter, vip_qos_policy_id , contains a policy ID. Additional resources loadbalancer in the Command line interface reference 10.4. Creating a load balancer with an access control list You can create an access control list (ACL) to limit incoming traffic to a listener to a set of allowed source IP addresses. Any other incoming traffic is rejected. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites A shared external (public) subnet that you can reach from the internet. 
Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a listener ( listener1 ) with the allowed CIDRs ( 192.0.2.0/24 and 198.51.100.0/24 ). Example Create the listener default pool ( pool1 ). Example In this example, a pool is created that uses a private subnet containing back-end servers that are configured with a custom application on TCP port 80: Create a health monitor ( healthmon1 ) on the pool that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the listener ( listener1 ) settings. Example Sample output In this example the parameter, allowed_cidrs , is set to allow traffic only from 192.0.2.0/24 and 198.51.100.0/24. To verify that the load balancer is secure, send a request to the listener from a client whose CIDR is not in the allowed_cidrs list; the request does not succeed. Sample output Additional resources loadbalancer in the Command line interface reference 10.5. Creating an OVN load balancer You can use the Red Hat OpenStack Platform (RHOSP) client to create a load balancer that manages network traffic in your RHOSP deployment. The RHOSP Load-Balancing service supports the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN). Prerequisites The ML2/OVN provider driver must be deployed. Important The OVN provider only supports Layer 4 TCP and UDP network traffic and the SOURCE_IP_PORT load balancer algorithm. The OVN provider does not support health monitoring. A shared external (public) subnet that you can reach from the internet. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on the private subnet ( private_subnet ) using the --provider ovn argument. Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a listener ( listener1 ) that uses the protocol ( tcp ) on the specified port ( 80 ) for which the custom application is configured. Note The OVN provider only supports Layer 4 TCP and UDP network traffic. Example Create the listener default pool ( pool1 ). Note The only supported load-balancing algorithm for OVN is SOURCE_IP_PORT . Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host a custom application on a specific TCP port: Important OVN does not support the health monitor feature for load-balancing. Add the back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output Run the openstack loadbalancer listener show command to view the listener details. 
Example Sample output Run the openstack loadbalancer pool show command to view the pool ( pool1 ) and load-balancer members. Example Sample output Additional resources loadbalancer in the Command line interface reference | [
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 23456 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol TCP",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type TCP pool1",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1",
"openstack loadbalancer show lb1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+",
"openstack loadbalancer member list pool1",
"openstack loadbalancer member show pool1 member1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol UDP --protocol-port 1234 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol UDP",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 5 --max-retries 2 --timeout 3 --type UDP-CONNECT pool1",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 1234 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 1234 pool1",
"openstack loadbalancer show lb1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+",
"openstack loadbalancer member show pool1 member1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 1234 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack network qos policy create qos_policy_bandwidth openstack network qos rule create --type bandwidth-limit --max-kbps 1024 --max-burst-kbits 1024 qos-policy-bandwidth",
"openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --vip-qos-policy-id qos-policy-bandwidth --wait",
"openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1",
"openstack loadbalancer list",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 --allowed-cidr 192.0.2.0/24 --allowed-cidr 198.51.100.0/24 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol TCP",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1",
"openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1",
"openstack loadbalancer listener show listener1",
"+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2022-01-15T11:11:09 | | default_pool_id | None | | default_tls_container_ref | None | | description | | | id | d26ba156-03c3-4051-86e8-f8997a202d8e | | insert_headers | None | | l7policies | | | loadbalancers | 2281487a-54b9-4c2a-8d95-37262ec679d6 | | name | listener1 | | operating_status | ONLINE | | project_id | 308ca9f600064f2a8b3be2d57227ef8f | | protocol | TCP | | protocol_port | 80 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2022-01-15T11:12:42 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | 192.0.2.0/24 | | | 198.51.100.0/24 | +-----------------------------+--------------------------------------+",
"curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out",
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --provider ovn --vip-subnet-id private_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol tcp --protocol-port 80 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm SOURCE_IP_PORT --listener listener1 --protocol tcp",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1",
"openstack loadbalancer show lb1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | ovn | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+",
"openstack loadbalancer listener show listener1",
"+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2022-01-15T11:13:52 | | default_pool_id | a5034e7a-7ddf-416f-9c42-866863def1f2 | | default_tls_container_ref | None | | description | | | id | a101caba-5573-4153-ade9-4ea63153b164 | | insert_headers | None | | l7policies | | | loadbalancers | 653b8d79-e8a4-4ddc-81b4-e3e6b42a2fe3 | | name | listener1 | | operating_status | ONLINE | | project_id | 7982a874623944d2a1b54fac9fe46f0b | | protocol | TCP | | protocol_port | 64015 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2022-01-15T11:15:17 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+",
"openstack loadbalancer pool show pool1",
"+----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:17:34 | | description | | | healthmonitor_id | | | id | a5034e7a-7ddf-416f-9c42-866863def1f2 | | lb_algorithm | SOURCE_IP_PORT | | listeners | a101caba-5573-4153-ade9-4ea63153b164 | | loadbalancers | 653b8d79-e8a4-4ddc-81b4-e3e6b42a2fe3 | | members | 90d69170-2f73-4bfd-ad31-896191088f59 | | name | pool1 | | operating_status | ONLINE | | project_id | 7982a874623944d2a1b54fac9fe46f0b | | protocol | TCP | | provisioning_status | ACTIVE | | session_persistence | None | | updated_at | 2022-01-15T11:18:59 | | tls_container_ref | None | | ca_tls_container_ref | None | | crl_container_ref | None | | tls_enabled | False | +----------------------+--------------------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/create-other-kinds-lbs_rhosp-lbaas |
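The procedures in sections 10.1 to 10.5 above list their commands separately from the steps that reference them. As a convenience, the following consolidated sketch strings together the TCP load balancer flow from section 10.1 in order, reusing the same sample names ( lb1 , listener1 , pool1 , healthmon1 , public_subnet , private_subnet ) and sample addresses from those commands. The back-end member port of 80 is an assumption about where the application listens on the member servers; substitute values that are appropriate for your site.

source ~/overcloudrc

# Create the load balancer and wait for it to become ACTIVE
openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait

# TCP listener on the port for which the custom application is configured
openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 23456 lb1

# Default pool for the listener
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol TCP

# Recommended but optional: TCP health monitor that probes the member service port
openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type TCP pool1

# Back-end members on the private subnet
# Port 80 assumes the application listens on 80 on each member; adjust to your service port
openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1
openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1

# Verify the load balancer and a member
openstack loadbalancer show lb1
openstack loadbalancer member show pool1 member1

For the OVN provider described in section 10.5, the same ordering applies, but you add --provider ovn to the load balancer create command, use SOURCE_IP_PORT as the pool algorithm, and omit the health monitor step because the OVN provider does not support health monitoring.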
Chapter 6. Installing a cluster on GCP with network customizations | Chapter 6. Installing a cluster on GCP with network customizations In OpenShift Container Platform version 4.12, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. 
If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. 
alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 6.4. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. platform.gcp.networkProjectID Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. platform.gcp.projectID The name of the GCP project where the installation program installs the cluster. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.controlPlaneSubnet The name of the existing subnet where you want to deploy your control plane machines. The subnet name. platform.gcp.computeSubnet The name of the existing subnet where you want to deploy your compute machines. The subnet name. platform.gcp.createFirewallRules Optional. Set this value to Disabled if you want to create and manage your firewall rules using network tags. By default, the cluster will automatically create and manage the firewall rules that are required for cluster communication. Your service account must have roles/compute.networkAdmin and roles/compute.securityAdmin privileges in the host project to perform these tasks automatically. If your service account does not have the roles/dns.admin privilege in the host project, it must have the dns.networks.bindPrivateDNSZone permission. Enabled or Disabled . The default value is Enabled . platform.gcp.publicDNSZone.project Optional. The name of the project that contains the public DNS zone. If you set this value, your service account must have the roles/dns.admin privilege in the specified project. If you do not set this value, it defaults to gcp.projectId . The name of the project that contains the public DNS zone. platform.gcp.publicDNSZone.id Optional. The ID or name of an existing public DNS zone. The public DNS zone domain must match the baseDomain parameter. If you do not set this value, the installation program will use a public DNS zone in the service project. The public DNS zone name. platform.gcp.privateDNSZone.project Optional. The name of the project that contains the private DNS zone. 
If you set this value, your service account must have the roles/dns.admin privilege in the host project. If you do not set this value, it defaults to gcp.projectId . The name of the project that contains the private DNS zone. platform.gcp.privateDNSZone.id Optional. The ID or name of an existing private DNS zone. If you do not set this value, the installation program will create a private DNS zone in the service project. The private DNS zone name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use. platform.gcp.defaultMachinePlatform.zones The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.defaultMachinePlatform.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.defaultMachinePlatform.osDisk.diskType The GCP disk type . Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. Compute nodes can be either type. platform.gcp.defaultMachinePlatform.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for both types of machines. String. The name of GCP project where the image is located. platform.gcp.defaultMachinePlatform.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. platform.gcp.defaultMachinePlatform.tags Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . platform.gcp.defaultMachinePlatform.type The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.location The GCP location in which the KMS key ring exists. The GCP location. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.projectID The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. 
For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . controlPlane.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. controlPlane.platform.gcp.osDisk.diskType The GCP disk type for control plane machines. Control plane machines must use the pd-ssd disk type, which is the default. controlPlane.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for control plane machines only. String. The name of GCP project where the image is located. controlPlane.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. controlPlane.platform.gcp.tags Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . controlPlane.platform.gcp.type The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . controlPlane.platform.gcp.zones The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. 
compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . compute.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. compute.platform.gcp.osDisk.diskType The GCP disk type for compute machines. Either the default pd-ssd or the pd-standard disk type. compute.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for compute machines only. String. The name of GCP project where the image is located. compute.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. compute.platform.gcp.tags Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . compute.platform.gcp.type The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . compute.platform.gcp.zones The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . 6.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 6.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 6.5.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: 15 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 14 17 18 21 Required. 
The installation program prompts you for this value. 2 8 15 If you do not provide these parameters and values, the installation program provides the default value. 3 9 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 10 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 11 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 6 12 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 7 13 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image for the installation program to use to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 6.5.6. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. 
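After the cluster is running, you can optionally confirm the applied settings by inspecting the cluster Proxy object. This is a post-installation check rather than part of the procedure above, and it assumes that the oc CLI is installed and that you are logged in to the cluster:
USD oc get proxy/cluster -o yaml
The httpProxy , httpsProxy , and noProxy values from install-config.yaml appear in the object's spec and status fields.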
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the network plugin during phase 2. 6.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 6.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. 
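If you want to review the rendered configuration after installation, one option (assuming the oc CLI is installed and you are logged in to the cluster) is to display the cluster CR directly:
USD oc get network.operator.openshift.io cluster -o yaml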
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.6. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.7. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 6.8. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. 
mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.9. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation.
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 6.10. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.11. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform 4.12, egress IP is only assigned to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform 4.12. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPsec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 6.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package .
The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
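If you do need to manually approve pending node-bootstrapper CSRs after such a restart, one possible sequence, assuming the oc CLI is configured for the cluster, is:
USD oc get csr
USD oc adm certificate approve <csr_name>
Replace <csr_name> with the name of a CSR that is reported in the Pending state by the first command.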
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: 15 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/installing-gcp-network-customizations |