Dataset columns: title (string, length 4 to 168), content (string, length 7 to 1.74M), commands (sequence, length 1 to 5.62k), url (string, length 79 to 342).
7.5. Configuring System Services for SSSD
7.5. Configuring System Services for SSSD SSSD provides interfaces towards several system services. Most notably: Name Service Switch (NSS) See Section 7.5.1, "Configuring Services: NSS" . Pluggable Authentication Modules (PAM) See Section 7.5.2, "Configuring Services: PAM" . OpenSSH See Configuring SSSD to Provide a Cache for the OpenSSH Services in the Linux Domain Identity, Authentication, and Policy Guide . autofs See Section 7.5.3, "Configuring Services: autofs " . sudo See Section 7.5.4, "Configuring Services: sudo " . 7.5.1. Configuring Services: NSS How SSSD Works with NSS The Name Service Switch (NSS) service maps system identities and services with configuration sources: it provides a central configuration store where services can look up sources for various configuration and name resolution mechanisms. SSSD can use NSS as a provider for several types of NSS maps. Most notably: User information (the passwd map) Groups (the groups map) Netgroups (the netgroups map) Services (the services map) Prerequisites Install SSSD. Configure NSS Services to Use SSSD Use the authconfig utility to enable SSSD: This updates the /etc/nsswitch.conf file to enable the following NSS maps to use SSSD: Open /etc/nsswitch.conf and add sss to the services map line: Configure SSSD to work with NSS Open the /etc/sssd/sssd.conf file. In the [sssd] section, make sure that NSS is listed as one of the services that works with SSSD. In the [nss] section, configure how SSSD interacts with NSS. For example: For a complete list of available options, see NSS configuration options in the sssd.conf (5) man page. Restart SSSD. Test That the Integration Works Correctly Display information about a user with these commands: id user getent passwd user 7.5.2. Configuring Services: PAM Warning A mistake in the PAM configuration file can lock users out of the system completely. Always back up the configuration files before performing any changes, and keep a session open so that you can revert any changes. Configure PAM to Use SSSD Use the authconfig utility to enable SSSD: This updates the PAM configuration to reference the SSSD modules, usually in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files. For example: For details, see the pam.conf (5) or pam (8) man pages. Configure SSSD to work with PAM Open the /etc/sssd/sssd.conf file. In the [sssd] section, make sure that PAM is listed as one of the services that works with SSSD. In the [pam] section, configure how SSSD interacts with PAM. For example: For a complete list of available options, see PAM configuration options in the sssd.conf (5) man page. Restart SSSD. Test That the Integration Works Correctly Try logging in as a user. Use the sssctl user-checks user_name auth command to check your SSSD configuration. For details, use the sssctl user-checks --help command. 7.5.3. Configuring Services: autofs How SSSD Works with automount The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), which saves system resources. For details on automount , see autofs in the Storage Administration Guide . You can configure automount to point to SSSD. In this setup: When a user attempts to mount a directory, SSSD contacts LDAP to obtain the required information about the current automount configuration. SSSD stores the information required by automount in a cache, so that users can mount directories even when the LDAP server is offline. Configure autofs to Use SSSD Install the autofs package. Open the /etc/nsswitch.conf file. 
On the automount line, change where the system looks for the automount map information from ldap to sss : Configure SSSD to work with autofs Open the /etc/sssd/sssd.conf file. In the [sssd] section, add autofs to the list of services that SSSD manages. Create a new [autofs] section. You can leave it empty. For a list of available options, see AUTOFS configuration options in the sssd.conf (5) man page. Make sure an LDAP domain is available in sssd.conf , so that SSSD can read the automount information from LDAP. See Section 7.3.2, "Configuring an LDAP Domain for SSSD" . The [domain] section of sssd.conf accepts several autofs -related options. For example: For a complete list of available options, see DOMAIN SECTIONS in the sssd.conf (5) man page. If you do not provide additional autofs options, the configuration depends on the identity provider settings. Restart SSSD. Test the Configuration Use the automount -m command to print the maps from SSSD. 7.5.4. Configuring Services: sudo How SSSD Works with sudo The sudo utility gives administrative access to specified users. For more information about sudo , see The sudo utility documentation in the System Administrator's Guide . You can configure sudo to point to SSSD. In this setup: When a user attempts a sudo operation, SSSD contacts LDAP or AD to obtain the required information about the current sudo configuration. SSSD stores the sudo information in a cache, so that users can perform sudo operations even when the LDAP or AD server is offline. SSSD only caches sudo rules that apply to the local system, depending on the value of the sudoHost attribute. See the sssd-sudo (5) man page for details. Configure sudo to Use SSSD Open the /etc/nsswitch.conf file. Add SSSD to the list on the sudoers line. Configure SSSD to work with sudo Open the /etc/sssd/sssd.conf file. In the [sssd] section, add sudo to the list of services that SSSD manages. Create a new [sudo] section. You can leave it empty. For a list of available options, see SUDO configuration options in the sssd.conf (5) man page. Make sure an LDAP or AD domain is available in sssd.conf , so that SSSD can read the sudo information from the directory. For details, see: Section 7.3.2, "Configuring an LDAP Domain for SSSD" the Using Active Directory as an Identity Provider for SSSD section in the Windows Integration Guide . The [domain] section for the LDAP or AD domain must include these sudo -related parameters: Note Setting Identity Management or AD as the ID provider automatically enables the sudo provider. In this situation, it is not necessary to specify the sudo_provider parameter. For a complete list of available options, see DOMAIN SECTIONS in the sssd.conf (5) man page. For options available for a sudo provider, see the sssd-ldap (5) man page. Restart SSSD. If you use AD as the provider, you must extend the AD schema to support sudo rules. For details, see the sudo documentation. For details about providing sudo rules in LDAP or AD, see the sudoers.ldap (5) man page.
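Taken together, the per-service steps above all end up in the same /etc/sssd/sssd.conf. The following is a minimal consolidated sketch, not a verbatim example from this guide: the domain name, server URI, and search bases are assumptions and must be adapted to your LDAP environment.

[sssd]
services = nss, pam, autofs, sudo
domains = LDAP

[nss]
filter_users = root
filter_groups = root

[pam]
offline_credentials_expiration = 2

[autofs]

[sudo]

[domain/LDAP]
# Assumed LDAP identity provider; reuse the search bases from the examples above
id_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
autofs_provider = ldap
ldap_autofs_search_base = cn=automount,dc=example,dc=com
sudo_provider = ldap
ldap_sudo_search_base = ou=sudoers,dc=example,dc=com

After editing the file, restart SSSD with systemctl restart sssd.service as in the individual procedures.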
[ "yum install sssd", "authconfig --enablesssd --update", "passwd: files sss shadow: files sss group: files sss netgroup: files sss", "services: files sss", "[sssd] [... file truncated ...] services = nss , pam", "[nss] filter_groups = root filter_users = root entry_cache_timeout = 300 entry_cache_nowait_percentage = 75", "systemctl restart sssd.service", "authconfig --enablesssdauth --update", "[... file truncated ...] auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_sss.so use_first_pass auth required pam_deny.so [... file truncated ...]", "[sssd] [... file truncated ...] services = nss, pam", "[pam] offline_credentials_expiration = 2 offline_failed_login_attempts = 3 offline_failed_login_delay = 5", "systemctl restart sssd.service", "yum install autofs", "automount: files sss", "[sssd] services = nss,pam, autofs", "[autofs]", "[domain/LDAP] [... file truncated ...] autofs_provider=ldap ldap_autofs_search_base=cn=automount,dc=example,dc=com ldap_autofs_map_object_class=automountMap ldap_autofs_entry_object_class=automount ldap_autofs_map_name=automountMapName ldap_autofs_entry_key=automountKey ldap_autofs_entry_value=automountInformation", "systemctl restart sssd.service", "sudoers: files sss", "[sssd] services = nss,pam, sudo", "[sudo]", "[domain/ LDAP_or_AD_domain ] sudo_provider = ldap ldap_sudo_search_base = ou=sudoers,dc= example ,dc= com", "systemctl restart sssd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring_services
probe::tty.open
probe::tty.open Name probe::tty.open - Called when a tty is opened Synopsis tty.open Values inode_state the inode state file_mode the file mode inode_number the inode number file_flags the file flags file_name the file name inode_flags the inode flags
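For illustration, this probe can be used directly from the command line; a minimal sketch, assuming the systemtap package and matching kernel debuginfo are installed:

stap -e 'probe tty.open { printf("%s opened %s (inode %d)\n", execname(), file_name, inode_number) }'

The file_name and inode_number variables are the probe values listed above; execname() is a standard tapset function that returns the name of the process that triggered the probe.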
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tty-open
Monitoring APIs
Monitoring APIs OpenShift Container Platform 4.18 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/index
2.4. Configure Indexing
2.4. Configure Indexing 2.4.1. Configure Indexing in Library Mode Using XML Indexing can be configured in XML by adding the <indexing ... /> element to the cache configuration in the Infinispan core configuration file, and optionally passing additional properties to the embedded Lucene-based Query API engine. For example: Example 2.3. Configuring Indexing Using XML in Library Mode In this example, the index is stored in memory. As a result, when the relevant nodes shut down, the index is lost. This arrangement is ideal for brief demonstration purposes, but in real-world applications, use the default (store on file system) or store the index in Red Hat JBoss Data Grid to persist the index. 2.4.2. Configure Indexing Programmatically Indexing can be configured programmatically, avoiding XML configuration files. In this example, Red Hat JBoss Data Grid is started programmatically and also maps an object Author , which is stored in the grid and made searchable via two properties, without annotating the class. Example 2.4. Configure Indexing Programmatically 2.4.3. Configure the Index in Remote Client-Server Mode In Remote Client-Server Mode, index configuration depends on the provider and its configuration. The indexing mode depends on the provider and whether it is local or distributed. The following indexing modes are supported: NONE LOCAL = indexLocalOnly="true" ALL = indexLocalOnly="false" Index configuration in Remote Client-Server Mode is as follows: Example 2.5. Configuration in Remote Client-Server Mode Configure Lucene Caches By default, the Lucene caches are created as local caches; however, with this configuration the Lucene search results are not shared between nodes in the cluster. To prevent this, define the caches required by Lucene in a clustered mode, as seen in the following configuration snippet: Example 2.6. Configuring the Lucene cache in Remote Client-Server Mode These caches are discussed in further detail in Section 9.3, "Lucene Directory Configuration for Replicated Indexing" . 2.4.4. Rebuilding the Index The Lucene index can be rebuilt, if required, by reconstructing it from the data store in the cache. The index must be rebuilt if: The definition of what is indexed in the types has changed. A parameter affecting how the index is defined, such as the Analyser, has changed. The index is destroyed or corrupted, possibly due to a system administration error. To rebuild the index, obtain a reference to the MassIndexer and start it as follows: This operation reprocesses all data in the grid, and therefore may take some time. Rebuilding the index is also available as a JMX operation.
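As noted above, the ram directory provider does not persist the index. A minimal sketch of the same <indexing> element using the Hibernate Search filesystem directory provider instead; the index path is an assumption and must point to a directory writable by the node:

<indexing enabled="true">
    <properties>
        <!-- Store the Lucene index on the local file system instead of in memory -->
        <property name="default.directory_provider" value="filesystem" />
        <property name="default.indexBase" value="/var/lib/jdg/lucene-indexes" />
    </properties>
</indexing>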
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:6.4 http://www.infinispan.org/schemas/infinispan-config-6.4.xsd\" xmlns=\"urn:infinispan:config:6.4\"> <replicated-cache> <indexing enabled=\"true\"> <properties> <property name=\"default.directory_provider\" value=\"ram\" /> </properties> </indexing> </replicated-cache> </infinispan>", "import java.util.Properties; import org.hibernate.search.cfg.SearchMapping; import org.infinispan.Cache; import org.infinispan.configuration.cache.Configuration; import org.infinispan.configuration.cache.ConfigurationBuilder; import org.infinispan.manager.DefaultCacheManager; import org.infinispan.query.CacheQuery; import org.infinispan.query.Search; import org.infinispan.query.SearchManager; import org.infinispan.query.dsl.Query; import org.infinispan.query.dsl.QueryBuilder; [...] SearchMapping mapping = new SearchMapping(); mapping.entity(Author.class).indexed().providedId() .property(\"name\", ElementType.METHOD).field() .property(\"surname\", ElementType.METHOD).field(); Properties properties = new Properties(); properties.put(org.hibernate.search.Environment.MODEL_MAPPING, mapping); properties.put(\"[other.options]\", \"[...]\"); Configuration infinispanConfiguration = new ConfigurationBuilder() .indexing() .enable() .withProperties(properties) .build(); DefaultCacheManager cacheManager = new DefaultCacheManager(infinispanConfiguration); Cache<Long, Author> cache = cacheManager.getCache(); SearchManager sm = Search.getSearchManager(cache); Author author = new Author(1, \"FirstName\", \"Surname\"); cache.put(author.getId(), author); QueryBuilder qb = sm.buildQueryBuilderForClass(Author.class).get(); Query q = qb.keyword().onField(\"name\").matching(\"FirstName\").createQuery(); CacheQuery cq = sm.getQuery(q, Author.class); Assert.assertEquals(cq.getResultSize(), 1);", "<indexing index=\"LOCAL\"> <property name=\"default.directory_provider\" value=\"ram\" /> <!-- Additional configuration information here --> </indexing>", "<cache-container name=\"clustered\" default-cache=\"repltestcache\"> [...] <replicated-cache name=\"LuceneIndexesMetadata\" mode=\"SYNC\"> <transaction mode=\"NONE\"/> <indexing index=\"NONE\"/> </replicated-cache> <distributed-cache name=\"LuceneIndexesData\" mode=\"SYNC\"> <transaction mode=\"NONE\"/> <indexing index=\"NONE\"/> </distributed-cache> <replicated-cache name=\"LuceneIndexesLocking\" mode=\"SYNC\"> <transaction mode=\"NONE\"/> <indexing index=\"NONE\"/> </replicated-cache> [...] </cache-container>", "SearchManager searchManager = Search.getSearchManager(cache); searchManager.getMassIndexer().start();" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-configure_indexing
Chapter 6. Migrating custom providers
Chapter 6. Migrating custom providers Similarly to Red Hat Single Sign-On 7.6, custom providers are deployed to the Red Hat build of Keycloak by copying them to a deployment directory. In the Red Hat build of Keycloak, copy your providers to the providers directory instead of standalone/deployments , which no longer exists. Additional dependencies should also be copied to the providers directory. Red Hat build of Keycloak does not use a separate classpath for custom providers, so you may need to be more careful with additional dependencies that you include. In addition, the EAR and WAR packaging formats, and jboss-deployment-structure.xml files, are no longer supported. While Red Hat Single Sign-On 7.6 automatically discovered custom providers, and even supported the ability to hot-deploy custom providers while Keycloak is running, this behavior is no longer supported. Also, after you make a change to the providers or dependencies in the providers directory, you have to do a build or restart the server with the auto build feature. Depending on what APIs your providers use, you may also need to make some changes to the providers. See the following sections for details. 6.1. Transition from Java EE to Jakarta EE Keycloak migrated its codebase from Java EE (Enterprise Edition) to Jakarta EE, which brought various changes. We have upgraded all Jakarta EE specifications in order to support Jakarta EE 10, such as: Jakarta Persistence 3.1 Jakarta RESTful Web Services 3.1 Jakarta Mail API 2.1 Jakarta Servlet 6.0 Jakarta Activation 2.1 Jakarta EE 10 provides a modernized, simplified, lightweight approach to building cloud-native Java applications. The main change within this initiative is the change of namespace from javax.* to jakarta.* . This change does not apply to javax.* packages provided directly in the JDK, such as javax.security , javax.net , javax.crypto , etc. In addition, Jakarta EE APIs like session/stateless beans are no longer supported. 6.2. Removed third party dependencies Some dependencies were removed in Red Hat build of Keycloak, including: openshift-rest-client okio-jvm okhttp commons-lang commons-compress jboss-dmr kotlin-stdlib Also, since Red Hat build of Keycloak is no longer based on EAP, most of the EAP dependencies were removed. This change means that if you use any of these libraries as dependencies of your own providers deployed to the Red Hat build of Keycloak, you may also need to copy those JAR files explicitly to the Keycloak distribution providers directory. 6.3. Context and dependency injection are no longer enabled for JAX-RS Resources To provide a better runtime and leverage the underlying stack as much as possible, all injection points for contextual data using the javax.ws.rs.core.Context annotation were removed. The expected improvement in performance involves no longer creating proxy instances multiple times during the request lifecycle, and drastically reducing the amount of reflection code at runtime.
If you need access to the current request and response objects, you can now obtain their instances directly from the KeycloakSession : @Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response; was replaced by: KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse(); Additional contextual data can be obtained from the runtime through the KeycloakContext instance: KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class); 6.4. Deprecated methods from data providers and models Some previously deprecated methods are now removed in Red Hat build of Keycloak: RealmModel#searchForGroupByNameStream(String, Integer, Integer) UserProvider#getUsersStream(RealmModel, boolean) UserSessionPersisterProvider#loadUserSessions(int, int, boolean, int, String) Interfaces added for Streamification work, such as RoleMapperModel.Streams and similar KeycloakModelUtils#getClientScopeMappings Deprecated methods from KeycloakSession UserQueryProvider#getUsersStream methods Also, these other changes were made: Some methods from UserSessionProvider were moved to UserLoginFailureProvider . Streams interfaces in federated storage provider classes were deprecated. Streamification - interfaces now contain only Stream-based methods. For example in GroupProvider interface @Deprecated List<GroupModel> getGroups(RealmModel realm); was replaced by Stream<GroupModel> getGroupsStream(RealmModel realm); Consistent parameter ordering - methods now have strict parameter ordering where RealmModel is always the first parameter. For example in UserLookupProvider interface: @Deprecated UserModel getUserById(String id, RealmModel realm); was replaced by UserModel getUserById(RealmModel realm, String id) 6.4.1. List of changed interfaces ( o.k. stands for org.keycloak. package) server-spi module o.k.credential.CredentialInputUpdater o.k.credential.UserCredentialStore o.k.models.ClientProvider o.k.models.ClientSessionContext o.k.models.GroupModel o.k.models.GroupProvider o.k.models.KeyManager o.k.models.KeycloakSessionFactory o.k.models.ProtocolMapperContainerModel o.k.models.RealmModel o.k.models.RealmProvider o.k.models.RoleContainerModel o.k.models.RoleMapperModel o.k.models.RoleModel o.k.models.RoleProvider o.k.models.ScopeContainerModel o.k.models.UserCredentialManager o.k.models.UserModel o.k.models.UserProvider o.k.models.UserSessionProvider o.k.models.utils.RoleUtils o.k.sessions.AuthenticationSessionProvider o.k.storage.client.ClientLookupProvider o.k.storage.group.GroupLookupProvider o.k.storage.user.UserLookupProvider o.k.storage.user.UserQueryProvider server-spi-private module o.k.events.EventQuery o.k.events.admin.AdminEventQuery o.k.keys.KeyProvider 6.4.2. Refactorings in the storage layer Red Hat build of Keycloak undergoes a large refactoring to simplify the API usage, which impacts existing code. Some of these changes require updates to existing code. The following sections provide more detail. 6.4.2.1. Changes in the module structure Several public APIs around storage functionality in KeycloakSession have been consolidated, and some have been moved, deprecated, or removed. 
Three new modules have been introduced, and data-oriented code from the server-spi , server-spi-private , and services modules has been moved there: org.keycloak:keycloak-model-legacy Contains all public facing APIs from the legacy store, such as the User Storage API. org.keycloak:keycloak-model-legacy-private Contains private implementations that relate to user storage management, such as storage *Manager classes. org.keycloak:keycloak-model-legacy-services Contains all REST endpoints that directly operate on the legacy store. If, for example, your custom user storage provider implementation uses classes that have been moved to the new modules, you need to update your dependencies to include the new modules listed above. 6.4.2.2. Changes in KeycloakSession KeycloakSession has been simplified. Several methods have been removed in KeycloakSession . KeycloakSession session contained several methods for obtaining a provider for a particular object type, such as for a UserProvider there are users() , userLocalStorage() , userCache() , userStorageManager() , and userFederatedStorage() . This situation may be confusing for the developer who has to understand the exact meaning of each method. For those reasons, only the users() method is kept in KeycloakSession , and should replace all other calls listed above. The rest of the methods have been removed. The same pattern of deprecation applies to methods of other object areas, such as clients() or groups() . All methods ending in *StorageManager() and *LocalStorage() have been removed. The following sections describe how to migrate those calls to the new API or use the legacy API. 6.4.3. Migrating existing providers The existing providers need no migration if they do not call a removed method, which should be the case for most providers. If the provider uses removed methods, but does not rely on local versus non-local storage, changing a call from the now removed userLocalStorage() to the method users() is the best option. Be aware that the semantics change here as the new method involves a cache if that has been enabled in the local setup. Before migration: accessing a removed API doesn't compile session .userLocalStorage() ; After migration: accessing the new API when caller does not depend on the legacy storage API session .users() ; In the rare case when a custom provider needs to distinguish between the mode of a particular provider, access to the deprecated objects is provided by using the LegacyStoreManagers data store provider. This might be the case if the provider accesses the local storage directly or wants to skip the cache. This option will be available only if the legacy modules are part of the deployment. Before migration: accessing a removed API session .userLocalStorage() ; After migration: accessing the new functionality via the LegacyStoreManagers API ((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)) .userLocalStorage() ; Some user storage related APIs have been wrapped in org.keycloak.storage.UserStorageUtil for convenience. 6.4.4. Changes to RealmModel The methods getUserStorageProviders , getUserStorageProvidersStream , getClientStorageProviders , getClientStorageProvidersStream , getRoleStorageProviders and getRoleStorageProvidersStream have been removed.
Code that depends on these methods should cast the instance as follows: Before migration: code will not compile due to the changed API realm .getClientStorageProvidersStream() ...; After migration: cast the instance to the legacy interface ((LegacyRealmModel) realm) .getClientStorageProvidersStream() ...; Similarly, code that used to implement the interface RealmModel and wants to provide these methods should implement the new interface LegacyRealmModel . This interface is a sub-interface of RealmModel and includes the old methods: Before migration: code implements the old interface public class MyClass extends RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. */ /* ... */ } After migration: code implements the new interface public class MyClass extends LegacyRealmModel { /* ... */ } 6.4.5. Interface UserCache moved to the legacy module As the caching status of objects will be transparent to services, the interface UserCache has been moved to the module keycloak-model-legacy . Code that depends on the legacy implementation should access the UserCache directly. Before migration: code will not compile After migration: use the API directly UserStorageUtil.userCache(session); To trigger the invalidation of a realm, instead of using the UserCache API, consider triggering an event: Before migration: code uses the cache API After migration: use the invalidation API session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId()); 6.4.6. Credential management for users Credentials for users were previously managed using session.userCredentialManager().method(realm, user, ...) . The new way is to leverage user.credentialManager().method(...) . This form gets the credential functionality closer to the API of users, and does not rely on prior knowledge of the user credential's location in regard to realm and storage. The old APIs have been removed. Before migration: accessing a removed API session.userCredentialManager() .createCredential (realm, user, credentialModel) After migration: accessing the new API user.credentialManager() .createStoredCredential (credentialModel) For a custom UserStorageProvider , there is a new method credentialManager() that needs to be implemented when returning a UserModel . Those must return an instance of the LegacyUserCredentialManager : Before migration: code will not compile due to the new method credentialManager() required by UserModel public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } } After migration: implementation of the API UserModel.credentialManager() for the legacy store. public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } }
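As a practical illustration of the deployment model described at the start of this chapter, the following shell sketch copies a provider and its dependency into the providers directory and rebuilds the server; the JAR names and the /opt/keycloak install path are assumptions:

# Copy the provider JAR and any additional dependencies into the providers directory
cp my-provider.jar /opt/keycloak/providers/
cp my-provider-dependency.jar /opt/keycloak/providers/
# Rebuild the server so the new providers are picked up, then start it
/opt/keycloak/bin/kc.sh build
/opt/keycloak/bin/kc.sh start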
[ "@Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response;", "KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse();", "KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class);", "@Deprecated List<GroupModel> getGroups(RealmModel realm);", "Stream<GroupModel> getGroupsStream(RealmModel realm);", "@Deprecated UserModel getUserById(String id, RealmModel realm);", "UserModel getUserById(RealmModel realm, String id)", "session .userLocalStorage() ;", "session .users() ;", "session .userLocalStorage() ;", "((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)) .userLocalStorage() ;", "realm .getClientStorageProvidersStream() ...;", "((LegacyRealmModel) realm) .getClientStorageProvidersStream() ...;", "public class MyClass extends RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. / / ... */ }", "public class MyClass extends LegacyRealmModel { /* ... */ }", "session**.userCache()**.evict(realm, user);", "UserStorageUitl.userCache(session);", "UserCache cache = session.getProvider(UserCache.class); if (cache != null) cache.evict(realm)();", "session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId());", "session.userCredentialManager() .createCredential (realm, user, credentialModel)", "user.credentialManager() .createStoredCredential (credentialModel)", "public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } }", "public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/migration_guide/migrating-providers
23.19.2. IPL on an LPAR
23.19.2. IPL on an LPAR For LPAR-based installations, on the HMC, issue a load command to the LPAR, specifying the particular DASD, or the FCP adapter, WWPN, and FCP LUN where the /boot partition is located.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-complete-s390-ipl-lpar
Chapter 1. Operators overview
Chapter 1. Operators overview Operators are among the most important components of Red Hat OpenShift Service on AWS. They are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and the OpenShift CLI ( oc ). They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators are designed specifically for Kubernetes-native applications to implement and automate common Day 1 operations, such as installation and configuration. Operators can also automate Day 2 operations, such as autoscaling up or down and creating backups. All of these activities are directed by a piece of software running on your cluster. While both follow similar Operator concepts and goals, Operators in Red Hat OpenShift Service on AWS are managed by two different systems, depending on their purpose: Cluster Operators Managed by the Cluster Version Operator (CVO) and installed by default to perform cluster functions. Optional add-on Operators Managed by Operator Lifecycle Manager (OLM) and can be made accessible for users to run in their applications. Also known as OLM-based Operators . 1.1. For developers As an Operator author, you can perform the following development tasks for OLM-based Operators: Install Operator SDK CLI . Create Go-based Operators , Ansible-based Operators , and Helm-based Operators . Use Operator SDK to build, test, and deploy an Operator . Create an application from an installed Operator through the web console . 1.2. For administrators As an administrator with the dedicated-admin role, you can perform the following Operator tasks: Manage custom catalogs . Install an Operator from OperatorHub . View Operator status . Manage Operator conditions . Upgrade installed Operators . Delete installed Operators . Configure proxy support . 1.3. Next steps To understand more about Operators, see What are Operators?
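For example, the two kinds of Operators described above can be listed with the OpenShift CLI; a sketch, assuming you are logged in to the cluster with oc and have permission to read these resources:

# Cluster Operators managed by the Cluster Version Operator
oc get clusteroperators
# OLM-based Operators, represented by their ClusterServiceVersion objects
oc get csv --all-namespaces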
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/operators/operators-overview
Chapter 2. Installing and configuring web console by using RHEL system roles
Chapter 2. Installing and configuring web console by using RHEL system roles With the cockpit RHEL system role, you can automatically deploy and enable the web console on multiple RHEL systems. 2.1. Installing the web console by using the cockpit RHEL system role You can use the cockpit system role to automate installing and enabling the RHEL web console on multiple systems. In this example, you use the cockpit system role to: Install the RHEL web console. Configure the web console to use a custom port number (9050/tcp). By default, the web console uses port 9090. Allow the firewalld and selinux system roles to configure the system for opening new ports. Set the web console to use a certificate from the ipa trusted certificate authority instead of using a self-signed certificate. Note You do not have to call the firewall or certificate system roles in the playbook to manage the firewall or create the certificate. The cockpit system role calls them automatically as needed. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example, ~/playbook.yml , with the following content: --- - name: Manage the RHEL web console hosts: managed-node-01.example.com tasks: - name: Install RHEL web console ansible.builtin.include_role: name: rhel-system-roles.cockpit vars: cockpit_packages: default cockpit_port: 9050 cockpit_manage_selinux: true cockpit_manage_firewall: true cockpit_certificates: - name: /etc/cockpit/ws-certs.d/01-certificate dns: ['localhost', 'www.example.com'] ca: ipa The settings specified in the example playbook include the following: cockpit_manage_selinux: true Allow using the selinux system role to configure SELinux for setting up the correct port permissions on the websm_port_t SELinux type. cockpit_manage_firewall: true Allow the cockpit system role to use the firewalld system role for adding ports. cockpit_certificates: <YAML_dictionary> By default, the RHEL web console uses a self-signed certificate. Alternatively, you can add the cockpit_certificates variable to the playbook and configure the role to request certificates from an IdM certificate authority (CA) or to use an existing certificate and private key that is available on the managed node. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.cockpit/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.cockpit/README.md file /usr/share/doc/rhel-system-roles/cockpit directory Requesting certificates using RHEL system roles
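After the playbook run completes, you can optionally confirm that the web console answers on the custom port; a sketch that reuses the managed node and port from the example playbook (the -k option skips certificate verification and is only needed if the client does not trust the issuing CA):

curl -k https://managed-node-01.example.com:9050/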
[ "--- - name: Manage the RHEL web console hosts: managed-node-01.example.com tasks: - name: Install RHEL web console ansible.builtin.include_role: name: rhel-system-roles.cockpit vars: cockpit_packages: default cockpit_port: 9050 cockpit_manage_selinux: true cockpit_manage_firewall: true cockpit_certificates: - name: /etc/cockpit/ws-certs.d/01-certificate dns: ['localhost', 'www.example.com'] ca: ipa", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/assembly_installing-and-configuring-web-console-with-the-cockpit-rhel-system-role_system-management-using-the-rhel-8-web-console
8.2. sVirt Labeling
8.2. sVirt Labeling Like other services under the protection of SELinux, sVirt uses process-based mechanisms and restrictions to provide an extra layer of security over guest instances. Under typical use, you should not even notice that sVirt is working in the background. This section describes the labeling features of sVirt. As shown in the following output, when using sVirt, each Virtual Machine (VM) process is labeled and runs with a dynamically generated level. Each process is isolated from other VMs with different levels: The actual disk images are automatically labeled to match the processes, as shown in the following output: The following table outlines the different labels that can be assigned when using sVirt: Table 8.1. sVirt Labels Type SELinux Context Description Virtual Machine Processes system_u:system_r:svirt_t:MCS1 MCS1 is a randomly selected MCS field. Currently approximately 500,000 labels are supported. Virtual Machine Image system_u:object_r:svirt_image_t:MCS1 Only processes labeled svirt_t with the same MCS fields are able to read/write these image files and devices. Virtual Machine Shared Read/Write Content system_u:object_r:svirt_image_t:s0 All processes labeled svirt_t are allowed to write to the svirt_image_t:s0 files and devices. Virtual Machine Image system_u:object_r:virt_content_t:s0 System default label used when an image exits. No svirt_t virtual processes are allowed to read files/devices with this label. It is also possible to perform static labeling when using sVirt. Static labels allow the administrator to select a specific label, including the MCS/MLS field, for a virtual machine. Administrators who run statically-labeled virtual machines are responsible for setting the correct label on the image files. The virtual machine will always be started with that label, and the sVirt system will never modify the label of a statically-labeled virtual machine's content. This allows the sVirt component to run in an MLS environment. You can also run multiple virtual machines with different sensitivity levels on a system, depending on your requirements.
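For example, an administrator preparing a statically labeled guest can place the chosen label on the disk image manually; a sketch, reusing the image path and MCS pair shown in the outputs above:

# Label the image so that only the svirt_t process with the matching MCS pair can use it
chcon system_u:object_r:svirt_image_t:s0:c87,c520 /var/lib/libvirt/images/image1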
[ "~]# ps -eZ | grep qemu system_u:system_r:svirt_t:s0:c87,c520 27950 ? 00:00:17 qemu-kvm system_u:system_r:svirt_t:s0:c639,c757 27989 ? 00:00:06 qemu-system-x86", "~]# ls -lZ /var/lib/libvirtimages/* system_u:object_r:svirt_image_t:s0:c87,c520 image1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sec-security-enhanced_linux-svirt_labeling
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility_in_red_hat_insights/proc_providing-feedback-on-red-hat-documentation_converting-from-a-linux-distribution-to-rhel-in-insights
1.2. Logical Volumes
1.2. Logical Volumes Volume management creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly. With a logical volume, you are not restricted to physical disk sizes. In addition, the hardware storage configuration is hidden from the software so it can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs. Logical volumes provide the following advantages over using physical storage directly: Flexible capacity When using logical volumes, file systems can extend across multiple disks, since you can aggregate disks and partitions into a single logical volume. Resizeable storage pools You can extend logical volumes or reduce logical volumes in size with simple software commands, without reformatting and repartitioning the underlying disk devices. Online data relocation To deploy newer, faster, or more resilient storage subsystems, you can move data while your system is active. Data can be rearranged on disks while the disks are in use. For example, you can empty a hot-swappable disk before removing it. Convenient device naming Logical storage volumes can be managed in user-defined and custom named groups. Disk striping You can create a logical volume that stripes data across two or more disks. This can dramatically increase throughput. Mirroring volumes Logical volumes provide a convenient way to configure a mirror for your data. Volume Snapshots Using logical volumes, you can take device snapshots for consistent backups or to test the effect of changes without affecting the real data. The implementation of these features in LVM is described in the remainder of this document.
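The abstraction described above maps onto a small set of commands; a brief sketch, assuming two spare disks /dev/sdb and /dev/sdc and an ext4 file system on the logical volume:

# Aggregate two physical disks into one volume group
pvcreate /dev/sdb /dev/sdc
vgcreate myvg /dev/sdb /dev/sdc
# Carve out a logical volume, then grow it later without repartitioning
lvcreate --name mylv --size 100G myvg
lvextend --size +50G /dev/myvg/mylv
resize2fs /dev/myvg/mylv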
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/logical_volumes
Chapter 15. Managing security context constraints
Chapter 15. Managing security context constraints In OpenShift Container Platform, you can use security context constraints (SCCs) to control permissions for the pods in your cluster. Default SCCs are created during installation and when you install some Operators or other components. As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI ( oc ). Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . 15.1. About security context constraints Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. Security context constraints allow an administrator to control: Whether a pod can run privileged containers with the allowPrivilegedContainer flag Whether a pod is constrained with the allowPrivilegeEscalation flag The capabilities that a container can request The use of host directories as volumes The SELinux context of the container The container user ID The use of host namespaces and networking The allocation of an FSGroup that owns the pod volumes The configuration of allowable supplemental groups Whether a container requires write access to its root file system The usage of volume types The configuration of allowable seccomp profiles Important Do not set the openshift.io/run-level label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged. 15.1.1. Default security context constraints The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform. Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . Table 15.1. Default security context constraints Security context constraint Description anyuid Provides all features of the restricted SCC, but allows users to run with any UID and any GID. hostaccess Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning This SCC allows host access to namespaces, file systems, and PIDs. It should only be used by trusted pods. Grant with caution. 
hostmount-anyuid Provides all the features of the restricted SCC, but allows host mounts and running as any UID and any GID on the system. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. hostnetwork Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork . A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly. hostnetwork-v2 Like the hostnetwork SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. node-exporter Used for the Prometheus node exporter. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. nonroot Provides all features of the restricted SCC, but allows users to run with any non-root UID. The user must specify the UID or it must be specified in the manifest of the container runtime. nonroot-v2 Like the nonroot SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. privileged Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context. Warning This is the most relaxed SCC and should be used only for cluster administration. Grant with caution. The privileged SCC allows: Users to run privileged pods Pods to mount host directories as volumes Pods to run as any user Pods to run with any MCS label Pods to use the host's IPC namespace Pods to use the host's PID namespace Pods to use any FSGroup Pods to use any supplemental group Pods to use any seccomp profiles Pods to request any capabilities Note Setting privileged: true in the pod specification does not necessarily select the privileged SCC. The SCC that has allowPrivilegedContainer: true and has the highest prioritization will be chosen if the user has the permissions to use it. restricted Denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace. The restricted SCC: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs Requires that a pod is run with a pre-allocated MCS label Requires that a pod is run with a preallocated FSGroup Allows pods to use any supplemental group In clusters that were upgraded from OpenShift Container Platform 4.10 or earlier, this SCC is available for use by any authenticated user. The restricted SCC is no longer available to users of new OpenShift Container Platform 4.11 or later installations, unless the access is explicitly granted. restricted-v2 Like the restricted SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. 
This is the most restrictive SCC provided by a new installation and will be used by default for authenticated users. Note The restricted-v2 SCC is the most restrictive of the SCCs that are included by default with the system. However, you can create a custom SCC that is even more restrictive. For example, you can create an SCC that restricts readOnlyRootFilesystem to true . 15.1.2. Security context constraints settings Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories: Category Description Controlled by a boolean Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified. Controlled by an allowable set Fields of this type are checked against the set to ensure their value is allowed. Controlled by a strategy Items that have a strategy to generate a value provide: A mechanism to generate the value, and A mechanism to ensure that a specified value falls into the set of allowable values. CRI-O has the following default list of capabilities that are allowed for each container of a pod: CHOWN DAC_OVERRIDE FSETID FOWNER SETGID SETUID SETPCAP NET_BIND_SERVICE KILL The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities , defaultAddCapabilities , and requiredDropCapabilities parameters to control such requests from the pods. With these parameters, you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container. Note You can drop all capabilities from containers by setting the requiredDropCapabilities parameter to ALL . This is what the restricted-v2 SCC does. 15.1.3. Security context constraints strategies RunAsUser MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser . Example MustRunAs snippet ... runAsUser: type: MustRunAs uid: <id> ... MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range. Example MustRunAsRange snippet ... runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue> ... MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided. Example MustRunAsNonRoot snippet ... runAsUser: type: MustRunAsNonRoot ... RunAsAny - No default provided. Allows any runAsUser to be specified. Example RunAsAny snippet ... runAsUser: type: RunAsAny ... SELinuxContext MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions . RunAsAny - No default provided. Allows any seLinuxOptions to be specified. SupplementalGroups MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges. RunAsAny - No default provided. Allows any supplementalGroups to be specified. FSGroup MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. 
Validates against the first ID in the first range. RunAsAny - No default provided. Allows any fsGroup ID to be specified. 15.1.4. Controlling volumes The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume: awsElasticBlockStore azureDisk azureFile cephFS cinder configMap csi downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk ephemeral gitRepo glusterfs hostPath iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageos vsphereVolume * (A special value to allow the use of all volume types.) none (A special value to disallow the use of all volume types. Exists only for backwards compatibility.) The recommended minimum set of allowed volumes for new SCCs is configMap , downwardAPI , emptyDir , persistentVolumeClaim , secret , and projected . Note This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform. Note For backwards compatibility, the usage of allowHostDirVolumePlugin overrides settings in the volumes field. For example, if allowHostDirVolumePlugin is set to false but allowed in the volumes field, then the hostPath value will be removed from volumes . 15.1.5. Admission control Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user. In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod. The set of SCCs that admission uses to authorize a pod is determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account. Note When you create a workload resource, such as a deployment, only the service account is used to find the SCCs and admit the pods when they are created. Admission uses the following approach to create the final security context for the pod: Retrieve all SCCs available for use. Generate field values for security context settings that were not specified on the request. Validate the final settings against the available constraints. If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected. A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated: Note These examples are in the context of a strategy using the pre-allocated values. An FSGroup SCC strategy of MustRunAs If the pod defines an fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated. If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.fsGroup , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 
A SupplementalGroups SCC strategy of MustRunAs If the pod specification defines one or more supplementalGroups IDs, then the pod's IDs must equal one of the IDs in the namespace's openshift.io/sa.scc.supplemental-groups annotation. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated. If the SecurityContextConstraints.supplementalGroups field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.supplementalGroups , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 15.1.6. Security context constraints prioritization Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller. A priority value of 0 is the lowest possible priority. A nil priority is considered a 0 , or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting. When the complete set of available SCCs is determined, the SCCs are ordered in the following manner: The highest priority SCCs are ordered first. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive. If both the priorities and restrictions are equal, the SCCs are sorted by name. By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser in the pod's SecurityContext . 15.2. About pre-allocated security context constraints values The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod. The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification: A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks for the openshift.io/sa.scc.uid-range annotation to populate range fields. An SELinuxContext strategy of MustRunAs with no level set. Admission looks for the openshift.io/sa.scc.mcs annotation to populate the level. An FSGroup strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. A SupplementalGroups strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy: RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification. MustRunAs (single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace's default parameter value also appears in the pod's groups. MustRunAsRange and MustRunAs (range-based) strategies provide the minimum value of the range. As with a single value MustRunAs strategy, the namespace's default parameter value appears in the running pod. 
If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range. Note FSGroup and SupplementalGroups strategies fall back to the openshift.io/sa.scc.uid-range annotation if the openshift.io/sa.scc.supplemental-groups annotation does not exist on the namespace. If neither exists, the SCC is not created. Note By default, the annotation-based FSGroup strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3 , the FSGroup strategy configures itself with a minimum and maximum value of 1 . If you want to allow more groups to be accepted for the FSGroup field, you can configure a custom SCC that does not use the annotation. Note The openshift.io/sa.scc.supplemental-groups annotation accepts a comma-delimited list of blocks in the format of <start>/<length> or <start>-<end> . The openshift.io/sa.scc.uid-range annotation accepts only a single block. 15.3. Example security context constraints The following examples show the security context constraints (SCC) format and annotations: Annotated privileged SCC allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*' 1 A list of capabilities that a pod can request. An empty list means that none of the capabilities can be requested, while the special symbol * allows any capabilities. 2 A list of additional capabilities that are added to any pod. 3 The FSGroup strategy, which dictates the allowable values for the security context. 4 The groups that can access this SCC. 5 A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities. 6 The runAsUser strategy type, which dictates the allowable values for the security context. 7 The seLinuxContext strategy type, which dictates the allowable values for the security context. 8 The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context. 9 The users who can access this SCC. 10 The allowable volume types for the security context. In the example, * allows the use of all volume types. The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2 SCC.
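The pre-allocated values described in this section come from annotations on the project itself. As a quick sketch of where to look (the project name and all values shown here are illustrative, not taken from this guide), you can display them with the oc client:

oc describe namespace my-project

Example output (truncated, values illustrative)

Name:         my-project
Labels:       kubernetes.io/metadata.name=my-project
Annotations:  openshift.io/sa.scc.mcs: s0:c26,c15
              openshift.io/sa.scc.supplemental-groups: 1000680000/10000
              openshift.io/sa.scc.uid-range: 1000680000/10000
Status:       Active

The openshift.io/sa.scc.uid-range annotation is the range that the runAsUser examples below draw from when a pod does not set an explicit user ID.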
Without explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that emits this pod. Because the restricted-v2 SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. The restricted-v2 SCC uses MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. The admission plugin will look for the openshift.io/sa.scc.uid-range annotation on the current project to populate range fields, as it does not provide this range. In the end, a container will have runAsUser equal to the first value of the range that is hard to predict because every project has different ranges. With explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 A container or pod that requests a specific user ID will be accepted by OpenShift Container Platform only when a service account or a user is granted access to a SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request. This configuration is valid for SELinux, fsGroup, and Supplemental Groups. 15.4. Creating security context constraints If the default security context constraints (SCCs) do not satisfy your application workload requirements, you can create a custom SCC by using the OpenShift CLI ( oc ). Important Creating and modifying your own SCCs are advanced operations that might cause instability to your cluster. If you have questions about using your own SCCs, contact Red Hat Support. For information about contacting Red Hat support, see Getting support . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with the cluster-admin role. Procedure Define the SCC in a YAML file named scc-admin.yaml : kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group Optionally, you can drop specific capabilities for an SCC by setting the requiredDropCapabilities field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specify ALL . For example, to create an SCC that drops the KILL , MKNOD , and SYS_CHROOT capabilities, add the following to the SCC object: requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT Note You cannot list a capability in both allowedCapabilities and requiredDropCapabilities . CRI-O supports the same list of capability values that are found in the Docker documentation . 
Create the SCC by passing in the file: USD oc create -f scc-admin.yaml Example output securitycontextconstraints "scc-admin" created Verification Verify that the SCC was created: USD oc get scc scc-admin Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere] 15.5. Configuring a workload to require a specific SCC You can configure a workload to require a certain security context constraint (SCC). This is useful in scenarios where you want to pin a specific SCC to the workload or if you want to prevent your required SCC from being preempted by another SCC in the cluster. To require a specific SCC, set the openshift.io/required-scc annotation on your workload. You can set this annotation on any resource that can set a pod manifest template, such as a deployment or daemon set. The SCC must exist in the cluster and must be applicable to the workload, otherwise pod admission fails. An SCC is considered applicable to the workload if the user creating the pod or the pod's service account has use permissions for the SCC in the pod's namespace. Warning Do not change the openshift.io/required-scc annotation in the live pod's manifest, because doing so causes the pod admission to fail. To change the required SCC, update the annotation in the underlying pod template, which causes the pod to be deleted and re-created. Prerequisites The SCC must exist in the cluster. Procedure Create a YAML file for the deployment and specify a required SCC by setting the openshift.io/required-scc annotation: Example deployment.yaml apiVersion: apps/v1 kind: Deployment spec: # ... template: metadata: annotations: openshift.io/required-scc: "my-scc" 1 # ... 1 Specify the name of the SCC to require. Create the resource by running the following command: USD oc create -f deployment.yaml Verification Verify that the deployment used the specified SCC: View the value of the pod's openshift.io/scc annotation by running the following command: USD oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io\/scc}{"\n"}' 1 1 Replace <pod_name> with the name of your deployment pod. Examine the output and confirm that the displayed SCC matches the SCC that you defined in the deployment: Example output my-scc 15.6. Role-based access to security context constraints You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
To include access to SCCs for your role, specify the scc resource when creating a role. USD oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace> This results in the following role definition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: ... name: role-name 1 namespace: namespace 2 ... rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use 1 The role's name. 2 Namespace of the defined role. Defaults to default if not specified. 3 The API group that includes the SecurityContextConstraints resource. Automatically defined when scc is specified as a resource. 4 An example name for an SCC you want to have access. 5 Name of the resource group that allows users to specify SCC names in the resourceNames field. 6 A list of verbs to apply to the role. A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name . Note Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use on SCC resources, including the restricted-v2 SCC. 15.7. Reference of security context constraints commands You can manage security context constraints (SCCs) in your instance as normal API objects by using the OpenShift CLI ( oc ). Note You must have cluster-admin privileges to manage SCCs. 15.7.1. Listing security context constraints To get a current list of SCCs: USD oc get scc Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","persistentVolumeClaim","projected","secret"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","nfs","persistentVolumeClaim","projected","secret"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostnetwork-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] nonroot-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] privileged true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] restricted-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 15.7.2. 
Examining security context constraints You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to. For example, to examine the restricted SCC: USD oc describe scc restricted Example output Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none> 1 Lists which users and service accounts the SCC is applied to. 2 Lists which groups the SCC is applied to. Note To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.3. Updating security context constraints If your custom SCC no longer satisfies your application workloads requirements, you can update your SCC by using the OpenShift CLI ( oc ). To update an existing SCC: USD oc edit scc <scc_name> Important To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.4. Deleting security context constraints If you no longer require your custom SCC, you can delete the SCC by using the OpenShift CLI ( oc ). To delete an SCC: USD oc delete scc <scc_name> Important Do not delete default SCCs. If you delete a default SCC, it is regenerated by the Cluster Version Operator. 15.8. Additional resources Getting support
[ "runAsUser: type: MustRunAs uid: <id>", "runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>", "runAsUser: type: MustRunAsNonRoot", "runAsUser: type: RunAsAny", "allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group", "requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT", "oc create -f scc-admin.yaml", "securitycontextconstraints \"scc-admin\" created", "oc get scc scc-admin", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]", "apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: template: metadata: annotations: openshift.io/required-scc: \"my-scc\" 1", "oc create -f deployment.yaml", "oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\\.io\\/scc}{\"\\n\"}' 1", "my-scc", "oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use", "oc get scc", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid 
false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc describe scc restricted", "Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>", "oc edit scc <scc_name>", "oc delete scc <scc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/managing-pod-security-policies
15.11. Managing Attributes Within Fractional Replication
15.11. Managing Attributes Within Fractional Replication As Section 15.1.7, "Replicating a Subset of Attributes with Fractional Replication" describes, fractional replication allows administrators to set attributes that are excluded from replication updates. Administrators can do this for a variety of performance reasons - to limit the number of large attributes that are sent over a network or to reduce the number of times that fixup tasks (like memberOf calculations) are run. The list of attributes to exclude from replication are defined in the nsDS5ReplicatedAttributeList attribute. This attribute is part of the replication agreement and it can be configured in the replication agreement wizard in the web console or through the command line when the replication agreement is created. Important Directory Server requires the (objectclass=*) USD EXCLUDE part in the value of the nsDS5ReplicatedAttributeList attribute. If you edit the attribute directly, for example using the ldapmodify utility, you must specify this part together with the list of attributes as displayed in the example above. However, both the dsconf and web console automatically add the (objectclass=*) USD EXCLUDE part, and you must only specify the attributes. 15.11.1. Setting Different Fractional Replication Attributes for Total and Incremental Updates When fractional replication is first configured, the list of excluded attributes applies to every update operation. Meaning, this list of attributes is excluded for a total update as well as regular incremental updates. However, there can be times when attributes should be excluded from incremental updates for performance but should be included in a total update to ensure the directory data sets are complete. In this case, it is possible to add a second attribute that defines a separate list of attributes to exclude from total updates, nsDS5ReplicatedAttributeListTotal . Note nsDS5ReplicatedAttributeList is the primary fractional replication attribute. If only nsDS5ReplicatedAttributeList is set, then it applies to both incremental updates and total updates. If both nsDS5ReplicatedAttributeList and nsDS5ReplicatedAttributeListTotal are set, then nsDS5ReplicatedAttributeList only applies to incremental updates. For example, every time a memberOf attribute is added to an entry, a memberOf fixup task is run to resolve the group membership. This can cause overhead on the server if that task is run every time replication occurs. Since a total update only occurs for a database which is newly-added to replication or that has been offline for a long time, running a memberOf fixup task after a total update is an acceptable option. In this case, the nsDS5ReplicatedAttributeList attribute lists memberOf so it is excluded from incremental updates, but nsDS5ReplicatedAttributeListTotal does not list memberOf so that it is included in total updates. The exclusion list for incremental updates is set in the nsDS5ReplicatedAttributeList attribute for the replication agreement. For example: To set the nsDS5ReplicatedAttributeList attribute, use the dsconf repl-agmt set command. For example: If nsDS5ReplicatedAttributeList is the only attribute set, then that list applies to both incremental and total updates. To set a separate list for total updates, add the nsDS5ReplicatedAttributeListTotal attribute to the replication agreement: Note The nsDS5ReplicatedAttributeList attribute must be set for incremental updates before nsDS5ReplicatedAttributeListTotal can be set for total updates. 15.11.2. 
The Replication Keep-alive Entry When you update an attribute on a supplier, the changelog change sequence number (CSN) is increased on the supplier. In a replication topology, this server now connects to the first consumer and compares the local CSN with the CSN on the consumer. If it is lower, the update is retrieved from the local changelog and replicated to the consumer. In a replication topology with fractional replication enabled, this can cause problems: if only attributes that are excluded from replication are updated on the supplier, no update to replicate is found, and therefore the CSN is not updated on the consumer. In such scenarios, unnecessary searching for updates on the supplier can cause other servers to receive the data later than needed. To work around this problem, Directory Server uses keep-alive entries. If all updated attributes on the supplier are excluded from replication and the number of skipped updates exceeds 100, the keepalivetimestamp attribute is updated on the supplier and replicated to the consumer. Because the keepalivetimestamp attribute is not excluded from replication, the update of the keep-alive entry is replicated, and the CSN on the consumer is updated and is then equal to the one on the supplier. The next time the supplier connects to the consumer, only updates that are newer than the CSN on the consumer are searched. This reduces the amount of time spent by a supplier to search for new updates to send. Directory Server automatically creates the replication keep-alive entry on demand on a supplier. It contains the replica ID of the supplier in the distinguished name (DN). Each keep-alive entry is specific to a given supplier. For example, to display the hidden keep-alive entry: The keep-alive entry is updated in the following situations (if it does not exist before the update, it is created first): When a fractional replication agreement skips more than 100 updates and does not send any updates before ending the replication session. When a supplier initializes a consumer, initially it creates its own keep-alive entry. A consumer that is also a supplier does not create its own keep-alive entry unless it also initializes another consumer. 15.11.3. Preventing "Empty" Updates from Fractional Replication Fractional replication allows a list of attributes which are removed from replication updates ( nsDS5ReplicatedAttributeList ). However, a change to an excluded attribute still triggers a modify event and generates an empty replication update. The nsds5ReplicaStripAttrs attribute adds a list of attributes which cannot be sent in an empty replication event and are stripped from the update sequence. This logically includes operational attributes like modifiersName . For example, let's say that the accountUnlockTime attribute is excluded. John Smith's user account is locked and then the time period expires and it is automatically unlocked. Only the accountUnlockTime attribute has changed, and that attribute is excluded from replication. However, the operational attribute internalmodifytimestamp also changed. A replication event is triggered because John Smith's user account was modified - but the only data to send is the new modify time stamp and the update is otherwise empty.
If there are a large number of attributes related to login times or password expiration times (for example), this could create a flood of empty replication updates that negatively affect server performance or that interfere with associated applications. To prevent this, add the nsds5ReplicaStripAttrs attribute to the replication agreement to help tune the fractional replication behavior, as shown in the example that follows this section. If a replication event is not empty, the stripped attributes are still replicated with the other changes. These attributes are removed from updates only if the event would otherwise be empty.
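As referenced above, a minimal sketch of making this change directly with the ldapmodify utility follows. The agreement name, suffix, bind parameters, and attribute list are all illustrative; the dsconf repl-agmt set --strip-list command shown in this guide performs the equivalent update.

ldapmodify -D "cn=Directory Manager" -W -p 389 -h supplier.example.com -x <<EOF
dn: cn=example-agreement,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaStripAttrs
nsds5ReplicaStripAttrs: modifiersname modifytimestamp internalmodifiersname
EOF

The replace operation also creates the attribute on the agreement entry if it does not exist yet.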
[ "nsDS5ReplicatedAttributeList: (objectclass=*) USD EXCLUDE memberof authorityRevocationList accountUnlockTime", "nsds5replicatedattributelist: (objectclass=*) USD EXCLUDE authorityRevocationList accountUnlockTime memberof", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix=\" suffix \" --frac-list=\"authorityRevocationList accountUnlockTime memberof\" agreement_name", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix=\" suffix \" --frac-list-total=\"accountUnlockTime\" agreement_name", "ldapsearch -D \"cn=Directory Manager\" -b \"dc=example,dc=com\" -W -p 389 -h server.example.com -x 'objectClass=ldapsubentry' dn: cn=repl keep alive 1,dc=example,dc=com objectclass: top objectclass: ldapsubentry objectclass: extensibleObject cn: repl keep alive 1 keepalivetimestamp: 20181112150654Z", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix=\" suffix \" --strip-list=\"modifiersname modifytimestamp internalmodifiersname\" agreement_name" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/managing-fractional-repl
19.5. Clustering and High Availability
19.5. Clustering and High Availability High Availability Add-On Administration The High Availability Add-On Administration guide provides information on how to configure and administer the High Availability Add-On in Red Hat Enterprise Linux 7. High Availability Add-On Overview The High Availability Add-On Overview document provides an overview of the High Availability Add-On for Red Hat Enterprise Linux 7. High Availability Add-On Reference High Availability Add-On Reference is a reference guide to the High Availability Add-On for Red Hat Enterprise Linux 7. Load Balancer Administration Load Balancer Administration is a guide to configuring and administering high-performance load balancing in Red Hat Enterprise Linux 7. DM Multipath The DM Multipath book guides users through configuring and administering the Device-Mapper Multipath feature for Red Hat Enterprise Linux 7.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-Red_Hat_Enterprise_Linux-7.0_Release_Notes-Documentation-Clustering_and_High_Availability
Chapter 9. Installing a private cluster on GCP
Chapter 9. Installing a private cluster on GCP In OpenShift Container Platform version 4.14, you can install a private cluster into an existing VPC on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 9.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.2.1. Private clusters in GCP To create a private cluster on Google Cloud Platform (GCP), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the GCP APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. Because it is not possible to limit access to external load balancers based on source tags, the private cluster uses only internal load balancers to allow access to internal instances. The internal load balancer relies on instance groups rather than the target pools that the network load balancers use. The installation program creates instance groups for each zone, even if there is no instance in that group. The cluster IP address is internal only. 
One forwarding rule manages both the Kubernetes API and machine config server ports. The backend service is comprised of each zone's instance group and, while it exists, the bootstrap instance group. The firewall uses a single rule that is based on only internal source ranges. 9.2.1.1. Limitations No health check for the Machine config server, /healthz , runs because of a difference in load balancer functionality. Two internal load balancers cannot share a single IP address, but two network load balancers can share a single external IP address. Instead, the health of an instance is determined entirely by the /readyz check on port 6443. 9.3. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into an existing VPC in Google Cloud Platform (GCP). If you do, you must also use existing subnets within the VPC and routing rules. By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself. 9.3.1. Requirements for using your VPC The installation program will no longer create the following components: VPC Subnets Cloud router Cloud NAT NAT IP addresses If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster. Your VPC and subnets must meet the following characteristics: The VPC must be in the same GCP project that you deploy the OpenShift Container Platform cluster to. To allow access to the internet from the control plane and compute machines, you must configure cloud NAT on the subnets to allow egress to it. These machines do not have a public address. Even if you do not require access to the internet, you must allow egress to the VPC network to obtain the installation program and images. Because multiple cloud NATs cannot be configured on the shared subnets, the installation program cannot configure it. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist and belong to the VPC that you specified. The subnet CIDRs belong to the machine CIDR. You must provide a subnet to deploy the cluster control plane and compute machines to. You can use the same subnet for both machine types. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. 9.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or Ingress rules. 
The GCP credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage, and nodes. 9.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is preserved by firewall rules that reference the machines in your cluster by the cluster's infrastructure ID. Only traffic within the cluster is allowed. If you deploy multiple clusters to the same VPC, the following components might share access between clusters: The API, which is globally available with an external publishing strategy or available throughout the network in an internal publishing strategy Debugging tools, such as ports on VM instances that are open to the machine CIDR for SSH and ICMP access 9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.7. Manually creating the installation configuration file When installing a private OpenShift Container Platform cluster, you must manually generate the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 9.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements.
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.7.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 9.1. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 9.7.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 9.2. Machine series for 64-bit ARM machines Tau T2A 9.7.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 9.7.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 9.7.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. 
Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 9.7.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 publish: Internal 27 1 15 17 18 24 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 
5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . Additional resources Enabling customer-managed encryption keys for a compute machine set 9.7.8. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml file and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration set to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 9.7.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ).
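After the cluster is installed, you can confirm how this field was populated. The following command is an illustrative check only; it assumes that the cluster is already running and that you are logged in with the oc client. It prints the populated field of the Proxy object named cluster : $ oc get proxy/cluster -o jsonpath='{.status.noProxy}' With the sample networking values used in this chapter, the output would include entries such as 10.0.0.0/16 , 10.128.0.0/14 , 172.30.0.0/16 , and, on GCP, 169.254.169.254 ; the exact contents depend on your installation configuration.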
Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.9. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 9.9.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 9.9.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 9.9.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
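Because ccoctl is a Linux binary, the architecture of the host on which you extract it must match the architecture of the release image, as noted in the procedure that follows. As an optional convenience check that is not part of the documented procedure, you can display the host architecture before you continue: $ uname -m Example output x86_64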
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 9.9.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 
4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 9.9.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 9.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
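Before you start the deployment, you can optionally review which GCP credentials are currently active on your host. This is a convenience check, not a required step, and it assumes that the gcloud CLI is installed: $ gcloud auth list The procedure that follows explains which stored credentials you must remove before you run the installation program.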
Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify that you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 9.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26 publish: Internal 27", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 
3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_gcp/installing-gcp-private
HawtIO Diagnostic Console Guide
HawtIO Diagnostic Console Guide Red Hat build of Apache Camel 4.0 Manage applications with Red Hat build of HawtIO
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/index
Chapter 49. loadbalancer
Chapter 49. loadbalancer This chapter describes the commands under the loadbalancer command. 49.1. loadbalancer amphora configure Update the amphora agent configuration Usage: Table 49.1. Positional arguments Value Summary <amphora-id> Uuid of the amphora to configure. Table 49.2. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.2. loadbalancer amphora delete Delete a amphora Usage: Table 49.3. Positional arguments Value Summary <amphora-id> Uuid of the amphora to delete. Table 49.4. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.3. loadbalancer amphora failover Force failover an amphora Usage: Table 49.5. Positional arguments Value Summary <amphora-id> Uuid of the amphora. Table 49.6. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.4. loadbalancer amphora list List amphorae Usage: Table 49.7. Command arguments Value Summary -h, --help Show this help message and exit --loadbalancer <loadbalancer> Filter by load balancer (name or id). --compute-id <compute-id> Filter by compute id. --role {BACKUP,MASTER,STANDALONE} Filter by role. --status {ALLOCATED,BOOTING,DELETED,ERROR,PENDING_CREATE,PENDING_DELETE,READY}, --provisioning-status {ALLOCATED,BOOTING,DELETED,ERROR,PENDING_CREATE,PENDING_DELETE,READY} Filter by amphora provisioning status. --long Show additional fields. Table 49.8. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.9. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.5. loadbalancer amphora show Show the details of a single amphora Usage: Table 49.12. Positional arguments Value Summary <amphora-id> Uuid of the amphora. Table 49.13. Command arguments Value Summary -h, --help Show this help message and exit Table 49.14. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.16. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.17. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.6. loadbalancer amphora stats show Shows the current statistics for an amphora. Usage: Table 49.18. Positional arguments Value Summary <amphora-id> Uuid of the amphora Table 49.19. Command arguments Value Summary -h, --help Show this help message and exit --listener <listener> Filter by listener (name or id) Table 49.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.21. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.7. loadbalancer availabilityzone create Create an octavia availability zone Usage: Table 49.24. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New availability zone name. --availabilityzoneprofile <availabilityzone_profile> Availability zone profile to add the az to (name or ID). --description <description> Set the availability zone description. --enable Enable the availability zone. --disable Disable the availability zone. Table 49.25. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.27. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.28. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.8. loadbalancer availabilityzone delete Delete an availability zone Usage: Table 49.29. Positional arguments Value Summary <availabilityzone> Name of the availability zone to delete. Table 49.30. Command arguments Value Summary -h, --help Show this help message and exit 49.9. loadbalancer availabilityzone list List availability zones Usage: Table 49.31. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List availability zones according to their name. --availabilityzoneprofile <availabilityzone_profile> List availability zones according to their az profile. --enable List enabled availability zones. 
--disable List disabled availability zones. Table 49.32. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.33. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.34. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.35. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.10. loadbalancer availabilityzone set Update an availability zone Usage: Table 49.36. Positional arguments Value Summary <availabilityzone> Name of the availability zone to update. Table 49.37. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set the description of the availability zone. --enable Enable the availability zone. --disable Disable the availability zone. 49.11. loadbalancer availabilityzone show Show the details for a single availability zone Usage: Table 49.38. Positional arguments Value Summary <availabilityzone> Name of the availability zone. Table 49.39. Command arguments Value Summary -h, --help Show this help message and exit Table 49.40. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.41. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.42. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.43. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.12. loadbalancer availabilityzone unset Clear availability zone settings Usage: Table 49.44. Positional arguments Value Summary <availabilityzone> Name of the availability zone to update. Table 49.45. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the availability zone description. 49.13. loadbalancer availabilityzoneprofile create Create an octavia availability zone profile Usage: Table 49.46. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New octavia availability zone profile name. --provider <provider name> Provider name for the availability zone profile. 
--availability-zone-data <availability_zone_data> The json string containing the availability zone metadata. Table 49.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.14. loadbalancer availabilityzoneprofile delete Delete an availability zone profile Usage: Table 49.51. Positional arguments Value Summary <availabilityzone_profile> Availability zone profile to delete (name or id) Table 49.52. Command arguments Value Summary -h, --help Show this help message and exit 49.15. loadbalancer availabilityzoneprofile list List availability zone profiles Usage: Table 49.53. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List availabilityzone profiles by profile name. --provider <provider_name> List availability zone profiles according to their provider. Table 49.54. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.55. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.56. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.57. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.16. loadbalancer availabilityzoneprofile set Update an availability zone profile Usage: Table 49.58. Positional arguments Value Summary <availabilityzone_profile> Name or uuid of the availability zone profile to update. Table 49.59. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the availability zone profile. --provider <provider_name> Set the provider of the availability zone profile. --availabilityzone-data <availabilityzone_data> Set the availability zone data of the profile. 49.17. loadbalancer availabilityzoneprofile show Show the details of a single availability zone profile Usage: Table 49.60. 
Positional arguments Value Summary <availabilityzone_profile> Name or uuid of the availability zone profile to show. Table 49.61. Command arguments Value Summary -h, --help Show this help message and exit Table 49.62. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.18. loadbalancer create Create a load balancer Usage: Table 49.66. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New load balancer name. --description <description> Set load balancer description. --vip-address <vip_address> Set the vip ip address. --vip-qos-policy-id <vip_qos_policy_id> Set qos policy id for vip port. unset with none . --project <project> Project for the load balancer (name or id). --provider <provider> Provider name for the load balancer. --availability-zone <availability_zone> Availability zone for the load balancer. --enable Enable load balancer (default). --disable Disable load balancer. --flavor <flavor> The name or id of the flavor for the load balancer. --wait Wait for action to complete --tag <tag> Tag to be added to the load balancer (repeat option to set multiple tags) --no-tag No tags associated with the load balancer Table 49.67. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.68. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.69. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.70. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 49.71. VIP Network Value Summary At least one of the following arguments is required.--vip-port-id <vip_port_id> Set port for the load balancer (name or id). --vip-subnet-id <vip_subnet_id> Set subnet for the load balancer (name or id). --vip-network-id <vip_network_id> Set network for the load balancer (name or id). 49.19. loadbalancer delete Delete a load balancer Usage: Table 49.72. Positional arguments Value Summary <load_balancer> Load balancers to delete (name or id) Table 49.73. Command arguments Value Summary -h, --help Show this help message and exit --cascade Cascade the delete to all child elements of the load balancer. 
--wait Wait for action to complete 49.20. loadbalancer failover Trigger load balancer failover Usage: Table 49.74. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 49.75. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.21. loadbalancer flavor create Create a octavia flavor Usage: Table 49.76. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New flavor name. --flavorprofile <flavor_profile> Flavor profile to add the flavor to (name or id). --description <description> Set flavor description. --enable Enable flavor. --disable Disable flavor. Table 49.77. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.78. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.79. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.80. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.22. loadbalancer flavor delete Delete a flavor Usage: Table 49.81. Positional arguments Value Summary <flavor> Flavor to delete (name or id) Table 49.82. Command arguments Value Summary -h, --help Show this help message and exit 49.23. loadbalancer flavor list List flavor Usage: Table 49.83. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List flavors according to their name. --flavorprofile <flavor_profile> List flavors according to their flavor profile. --enable List enabled flavors. --disable List disabled flavors. Table 49.84. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.85. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.86. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.87. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.24. loadbalancer flavor set Update a flavor Usage: Table 49.88. Positional arguments Value Summary <flavor> Name or uuid of the flavor to update. Table 49.89. 
Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the flavor. --enable Enable flavor. --disable Disable flavor. 49.25. loadbalancer flavor show Show the details for a single flavor Usage: Table 49.90. Positional arguments Value Summary <flavor> Name or uuid of the flavor. Table 49.91. Command arguments Value Summary -h, --help Show this help message and exit Table 49.92. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.93. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.94. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.26. loadbalancer flavor unset Clear flavor settings Usage: Table 49.96. Positional arguments Value Summary <flavor> Flavor to update (name or id). Table 49.97. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the flavor description. 49.27. loadbalancer flavorprofile create Create a octavia flavor profile Usage: Table 49.98. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New octavia flavor profile name. --provider <provider name> Provider name for the flavor profile. --flavor-data <flavor_data> The json string containing the flavor metadata. Table 49.99. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.100. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.101. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.102. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.28. loadbalancer flavorprofile delete Delete a flavor profile Usage: Table 49.103. Positional arguments Value Summary <flavor_profile> Flavor profiles to delete (name or id) Table 49.104. Command arguments Value Summary -h, --help Show this help message and exit 49.29. loadbalancer flavorprofile list List flavor profile Usage: Table 49.105. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List flavor profiles by flavor profile name. --provider <provider_name> List flavor profiles according to their provider. Table 49.106. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.107. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.108. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.30. loadbalancer flavorprofile set Update a flavor profile Usage: Table 49.110. Positional arguments Value Summary <flavor_profile> Name or uuid of the flavor profile to update. Table 49.111. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the flavor profile. --provider <provider_name> Set the provider of the flavor profile. --flavor-data <flavor_data> Set the flavor data of the flavor profile. 49.31. loadbalancer flavorprofile show Show the details for a single flavor profile Usage: Table 49.112. Positional arguments Value Summary <flavor_profile> Name or uuid of the flavor profile to show. Table 49.113. Command arguments Value Summary -h, --help Show this help message and exit Table 49.114. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.115. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.116. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.117. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.32. loadbalancer healthmonitor create Create a health monitor Usage: Table 49.118. Positional arguments Value Summary <pool> Set the pool for the health monitor (name or id). Table 49.119. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the health monitor name. --delay <delay> Set the time in seconds, between sending probes to members. --domain-name <domain_name> Set the domain name, which be injected into the http Host Header to the backend server for HTTP health check. --expected-codes <codes> Set the list of http status codes expected in response from the member to declare it healthy. 
--http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE} Set the http method that the health monitor uses for requests. --http-version <http_version> Set the http version. --timeout <timeout> Set the maximum time, in seconds, that a monitor waits to connect before it times out. This value must be less than the delay value. --max-retries <max_retries> The number of successful checks before changing the operating status of the member to ONLINE. --url-path <url_path> Set the http url path of the request sent by the monitor to test the health of a backend member. --type {PING,HTTP,TCP,HTTPS,TLS-HELLO,UDP-CONNECT,SCTP} Set the health monitor type. --max-retries-down <max_retries_down> Set the number of allowed check failures before changing the operating status of the member to ERROR. --enable Enable health monitor (default). --disable Disable health monitor. --wait Wait for action to complete --tag <tag> Tag to be added to the health monitor (repeat option to set multiple tags) --no-tag No tags associated with the health monitor Table 49.120. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.121. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.122. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.123. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.33. loadbalancer healthmonitor delete Delete a health monitor Usage: Table 49.124. Positional arguments Value Summary <health_monitor> Health monitor to delete (name or id). Table 49.125. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.34. loadbalancer healthmonitor list List health monitors Usage: Table 49.126. Command arguments Value Summary -h, --help Show this help message and exit --tags <tag>[,<tag>,... ] List health monitor which have all given tag(s) (Comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List health monitor which have any given tag(s) (Comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude health monitor which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude health monitor which have any given tag(s) (Comma-separated list of tags) Table 49.127. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.128. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.129. 
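As an example of the health monitor create command above (the monitor name, timing values, and URL path are illustrative, and a pool named web-pool is assumed to exist), an HTTP health monitor might be created with:

$ openstack loadbalancer healthmonitor create --name web-hm --type HTTP --delay 5 --timeout 4 --max-retries 3 --url-path /healthcheck web-pool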
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.130. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.35. loadbalancer healthmonitor set Update a health monitor Usage: Table 49.131. Positional arguments Value Summary <health_monitor> Health monitor to update (name or id). Table 49.132. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set health monitor name. --delay <delay> Set the time in seconds, between sending probes to members. --domain-name <domain_name> Set the domain name, which be injected into the http Host Header to the backend server for HTTP health check. --expected-codes <codes> Set the list of http status codes expected in response from the member to declare it healthy. --http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE} Set the http method that the health monitor uses for requests. --http-version <http_version> Set the http version. --timeout <timeout> Set the maximum time, in seconds, that a monitor waits to connect before it times out. This value must be less than the delay value. --max-retries <max_retries> Set the number of successful checks before changing the operating status of the member to ONLINE. --max-retries-down <max_retries_down> Set the number of allowed check failures before changing the operating status of the member to ERROR. --url-path <url_path> Set the http url path of the request sent by the monitor to test the health of a backend member. --enable Enable health monitor. --disable Disable health monitor. --wait Wait for action to complete --tag <tag> Tag to be added to the health monitor (repeat option to set multiple tags) --no-tag Clear tags associated with the health monitor. specify both --tag and --no-tag to overwrite current tags 49.36. loadbalancer healthmonitor show Show the details of a single health monitor Usage: Table 49.133. Positional arguments Value Summary <health_monitor> Name or uuid of the health monitor. Table 49.134. Command arguments Value Summary -h, --help Show this help message and exit Table 49.135. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.136. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.137. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.138. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.37. loadbalancer healthmonitor unset Clear health monitor settings Usage: Table 49.139. Positional arguments Value Summary <health_monitor> Health monitor to update (name or id). Table 49.140. 
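For example (values are illustrative), the web-hm monitor created earlier could later be tuned with the set command described above:

$ openstack loadbalancer healthmonitor set --delay 10 --max-retries 5 --wait web-hm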
Command arguments Value Summary -h, --help Show this help message and exit --domain-name Clear the health monitor domain name. --expected-codes Reset the health monitor expected codes to the api default. --http-method Reset the health monitor http method to the api default. --http-version Reset the health monitor http version to the api default. --max-retries-down Reset the health monitor max retries down to the api default. --name Clear the health monitor name. --url-path Clear the health monitor url path. --wait Wait for action to complete --tag <tag> Tag to be removed from the health monitor (repeat option to remove multiple tags) --all-tag Clear all tags associated with the health monitor 49.38. loadbalancer l7policy create Create a l7policy Usage: Table 49.141. Positional arguments Value Summary <listener> Listener to add l7policy to (name or id). Table 49.142. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the l7policy name. --description <description> Set l7policy description. --action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT} Set the action of the policy. --redirect-pool <pool> Set the pool to redirect requests to (name or id). --redirect-url <url> Set the url to redirect requests to. --redirect-prefix <url> Set the url prefix to redirect requests to. --redirect-http-code <redirect_http_code> Set the http response code for redirect_url orREDIRECT_PREFIX action. --position <position> Sequence number of this l7 policy. --enable Enable l7policy (default). --disable Disable l7policy. --wait Wait for action to complete --tag <tag> Tag to be added to the l7policy (repeat option to set multiple tags) --no-tag No tags associated with the l7policy Table 49.143. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.144. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.145. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.146. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.39. loadbalancer l7policy delete Delete a l7policy Usage: Table 49.147. Positional arguments Value Summary <policy> L7policy to delete (name or id). Table 49.148. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.40. loadbalancer l7policy list List l7policies Usage: Table 49.149. Command arguments Value Summary -h, --help Show this help message and exit --listener LISTENER List l7policies that applied to the given listener (name or ID). --tags <tag>[,<tag>,... ] List l7policy which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List l7policy which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude l7policy which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... 
] Exclude l7policy which have any given tag(s) (comma- separated list of tags) Table 49.150. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.151. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.152. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.153. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.41. loadbalancer l7policy set Update a l7policy Usage: Table 49.154. Positional arguments Value Summary <policy> L7policy to update (name or id). Table 49.155. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set l7policy name. --description <description> Set l7policy description. --action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT} Set the action of the policy. --redirect-pool <pool> Set the pool to redirect requests to (name or id). --redirect-url <url> Set the url to redirect requests to. --redirect-prefix <url> Set the url prefix to redirect requests to. --redirect-http-code <redirect_http_code> Set the http response code for redirect_url orREDIRECT_PREFIX action. --position <position> Set sequence number of this l7 policy. --enable Enable l7policy. --disable Disable l7policy. --wait Wait for action to complete --tag <tag> Tag to be added to the l7policy (repeat option to set multiple tags) --no-tag Clear tags associated with the l7policy. specify both --tag and --no-tag to overwrite current tags 49.42. loadbalancer l7policy show Show the details of a single l7policy Usage: Table 49.156. Positional arguments Value Summary <policy> Name or uuid of the l7policy. Table 49.157. Command arguments Value Summary -h, --help Show this help message and exit Table 49.158. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.159. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.160. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.161. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.43. 
loadbalancer l7policy unset Clear l7policy settings Usage: Table 49.162. Positional arguments Value Summary <policy> L7policy to update (name or id). Table 49.163. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the l7policy description. --name Clear the l7policy name. --redirect-http-code Clear the l7policy redirect http code. --wait Wait for action to complete --tag <tag> Tag to be removed from the l7policy (repeat option to remove multiple tags) --all-tag Clear all tags associated with the l7policy 49.44. loadbalancer l7rule create Create a l7rule Usage: Table 49.164. Positional arguments Value Summary <l7policy> L7policy to add l7rule to (name or id). Table 49.165. Command arguments Value Summary -h, --help Show this help message and exit --compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH} Set the compare type for the l7rule. --invert Invert l7rule. --value <value> Set the rule value to match on. --key <key> Set the key for the l7rule's value to match on. --type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD} Set the type for the l7rule. --enable Enable l7rule (default). --disable Disable l7rule. --wait Wait for action to complete --tag <tag> Tag to be added to the l7rule (repeat option to set multiple tags) --no-tag No tags associated with the l7rule Table 49.166. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.167. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.168. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.169. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.45. loadbalancer l7rule delete Delete a l7rule Usage: Table 49.170. Positional arguments Value Summary <l7policy> L7policy to delete rule from (name or id). <rule_id> L7rule to delete. Table 49.171. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.46. loadbalancer l7rule list List l7rules for l7policy Usage: Table 49.172. Positional arguments Value Summary <l7policy> L7policy to list rules for (name or id). Table 49.173. Command arguments Value Summary -h, --help Show this help message and exit --tags <tag>[,<tag>,... ] List l7rule which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List l7rule which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude l7rule which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude l7rule which have any given tag(s) (comma- separated list of tags) Table 49.174. 
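As an illustrative example of the L7 policy and L7 rule commands (the listener, pool, and policy names are placeholders), a policy that redirects matching requests to another pool can be combined with a path-based rule:

$ openstack loadbalancer l7policy create --name static-policy --action REDIRECT_TO_POOL --redirect-pool static-pool http-listener
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /static static-policy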
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.175. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.176. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.177. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.47. loadbalancer l7rule set Update a l7rule Usage: Table 49.178. Positional arguments Value Summary <l7policy> L7policy to update l7rule on (name or id). <l7rule_id> L7rule to update. Table 49.179. Command arguments Value Summary -h, --help Show this help message and exit --compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH} Set the compare type for the l7rule. --invert Invert l7rule. --value <value> Set the rule value to match on. --key <key> Set the key for the l7rule's value to match on. --type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD} Set the type for the l7rule. --enable Enable l7rule. --disable Disable l7rule. --wait Wait for action to complete --tag <tag> Tag to be added to the l7rule (repeat option to set multiple tags) --no-tag Clear tags associated with the l7rule. specify both --tag and --no-tag to overwrite current tags 49.48. loadbalancer l7rule show Show the details of a single l7rule Usage: Table 49.180. Positional arguments Value Summary <l7policy> L7policy to show rule from (name or id). <l7rule_id> L7rule to show. Table 49.181. Command arguments Value Summary -h, --help Show this help message and exit Table 49.182. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.183. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.184. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.185. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.49. loadbalancer l7rule unset Clear l7rule settings Usage: Table 49.186. Positional arguments Value Summary <l7policy> L7policy to update (name or id). <l7rule_id> L7rule to update. Table 49.187. 
Command arguments Value Summary -h, --help Show this help message and exit --invert Reset the l7rule invert to the api default. --key Clear the l7rule key. --wait Wait for action to complete --tag <tag> Tag to be removed from the l7rule (repeat option to remove multiple tags) --all-tag Clear all tags associated with the l7rule 49.50. loadbalancer list List load balancers Usage: Table 49.188. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List load balancers according to their name. --enable List enabled load balancers. --disable List disabled load balancers. --project <project-id> List load balancers according to their project (name or ID). --vip-network-id <vip_network_id> List load balancers according to their vip network (name or ID). --vip-subnet-id <vip_subnet_id> List load balancers according to their vip subnet (name or ID). --vip-qos-policy-id <vip_qos_policy_id> List load balancers according to their vip qos policy (name or ID). --vip-port-id <vip_port_id> List load balancers according to their vip port (name or ID). --provisioning-status {ACTIVE,DELETED,ERROR,PENDING_CREATE,PENDING_UPDATE,PENDING_DELETE} List load balancers according to their provisioning status. --operating-status {ONLINE,DRAINING,OFFLINE,DEGRADED,ERROR,NO_MONITOR} List load balancers according to their operating status. --provider <provider> List load balancers according to their provider. --flavor <flavor> List load balancers according to their flavor. --availability-zone <availability_zone> List load balancers according to their availability zone. --tags <tag>[,<tag>,... ] List load balancer which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List load balancer which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude load balancer which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude load balancer which have any given tag(s) (Comma-separated list of tags) Table 49.189. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.190. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.191. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.192. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.51. loadbalancer listener create Create a listener Usage: Table 49.193. Positional arguments Value Summary <loadbalancer> Load balancer for the listener (name or id). Table 49.194. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the listener name. 
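For example (the filter values are illustrative; the amphora provider is used here only as an example), active load balancers served by a particular provider could be listed as described above with:

$ openstack loadbalancer list --provisioning-status ACTIVE --provider amphora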
--description <description> Set the description of this listener. --protocol {TCP,HTTP,HTTPS,TERMINATED_HTTPS,UDP,SCTP} The protocol for the listener. --connection-limit <limit> Set the maximum number of connections permitted for this listener. --default-pool <pool> Set the name or id of the pool used by the listener if no L7 policies match. --default-tls-container-ref <container_ref> The uri to the key manager service secrets container containing the certificate and key for TERMINATED_TLS listeners. --sni-container-refs [<container_ref> ... ] A list of uris to the key manager service secrets containers containing the certificates and keys for TERMINATED_TLS the listener using Server Name Indication. --insert-headers <header=value,... > A dictionary of optional headers to insert into the request before it is sent to the backend member. --protocol-port <port> Set the protocol port number for the listener. --timeout-client-data <timeout> Frontend client inactivity timeout in milliseconds. Default: 50000. --timeout-member-connect <timeout> Backend member connection timeout in milliseconds. Default: 5000. --timeout-member-data <timeout> Backend member inactivity timeout in milliseconds. Default: 50000. --timeout-tcp-inspect <timeout> Time, in milliseconds, to wait for additional tcp packets for content inspection. Default: 0. --enable Enable listener (default). --disable Disable listener. --client-ca-tls-container-ref <container_ref> The uri to the key manager service secrets container containing the CA certificate for TERMINATED_TLS listeners. --client-authentication {NONE,OPTIONAL,MANDATORY} The tls client authentication verify options for TERMINATED_TLS listeners. --client-crl-container-ref <client_crl_container_ref> The uri to the key manager service secrets container containting the CA revocation list file for TERMINATED_TLS listeners. --allowed-cidr [<allowed_cidr>] Cidr to allow access to the listener (can be set multiple times). --wait Wait for action to complete --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the listener in OpenSSL format. --tls-version [<tls_versions>] Set the tls protocol version to be used by the listener (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the listener (can be set multiple times). --tag <tag> Tag to be added to the listener (repeat option to set multiple tags) --no-tag No tags associated with the listener Table 49.195. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.196. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.197. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.198. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.52. loadbalancer listener delete Delete a listener Usage: Table 49.199. Positional arguments Value Summary <listener> Listener to delete (name or id) Table 49.200. 
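As an example of the listener create command above (the listener name is a placeholder and a load balancer named my-lb is assumed to exist), an HTTP listener on port 80 might be created with:

$ openstack loadbalancer listener create --name http-listener --protocol HTTP --protocol-port 80 --wait my-lb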
Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.53. loadbalancer listener list List listeners Usage: Table 49.201. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List listeners by listener name. --loadbalancer <loadbalancer> Filter by load balancer (name or id). --enable List enabled listeners. --disable List disabled listeners. --project <project> List listeners by project id. --tags <tag>[,<tag>,... ] List listener which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List listener which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude listener which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude listener which have any given tag(s) (comma- separated list of tags) Table 49.202. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.203. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.204. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.205. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.54. loadbalancer listener set Update a listener Usage: Table 49.206. Positional arguments Value Summary <listener> Listener to modify (name or id). Table 49.207. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the listener name. --description <description> Set the description of this listener. --connection-limit <limit> The maximum number of connections permitted for this listener. Default value is -1 which represents infinite connections. --default-pool <pool> The id of the pool used by the listener if no l7 policies match. --default-tls-container-ref <container-ref> The uri to the key manager service secrets container containing the certificate and key for TERMINATED_TLSlisteners. --sni-container-refs [<container-ref> ... ] A list of uris to the key manager service secrets containers containing the certificates and keys for TERMINATED_TLS the listener using Server Name Indication. --insert-headers <header=value> A dictionary of optional headers to insert into the request before it is sent to the backend member. --timeout-client-data <timeout> Frontend client inactivity timeout in milliseconds. Default: 50000. --timeout-member-connect <timeout> Backend member connection timeout in milliseconds. Default: 5000. --timeout-member-data <timeout> Backend member inactivity timeout in milliseconds. Default: 50000. 
--timeout-tcp-inspect <timeout> Time, in milliseconds, to wait for additional tcp packets for content inspection. Default: 0. --enable Enable listener. --disable Disable listener. --client-ca-tls-container-ref <container_ref> The uri to the key manager service secrets container containing the CA certificate for TERMINATED_TLS listeners. --client-authentication {NONE,OPTIONAL,MANDATORY} The tls client authentication verify options for TERMINATED_TLS listeners. --client-crl-container-ref <client_crl_container_ref> The uri to the key manager service secrets container containting the CA revocation list file for TERMINATED_TLS listeners. --allowed-cidr [<allowed_cidr>] Cidr to allow access to the listener (can be set multiple times). --wait Wait for action to complete --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the listener in OpenSSL format. --tls-version [<tls_versions>] Set the tls protocol version to be used by the listener (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the listener (can be set multiple times). --tag <tag> Tag to be added to the listener (repeat option to set multiple tags) --no-tag Clear tags associated with the listener. specify both --tag and --no-tag to overwrite current tags 49.55. loadbalancer listener show Show the details of a single listener Usage: Table 49.208. Positional arguments Value Summary <listener> Name or uuid of the listener Table 49.209. Command arguments Value Summary -h, --help Show this help message and exit Table 49.210. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.211. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.212. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.213. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.56. loadbalancer listener stats show Shows the current statistics for a listener. Usage: Table 49.214. Positional arguments Value Summary <listener> Name or uuid of the listener Table 49.215. Command arguments Value Summary -h, --help Show this help message and exit Table 49.216. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.217. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.218. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.219. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.57. loadbalancer listener unset Clear listener settings Usage: Table 49.220. Positional arguments Value Summary <listener> Listener to modify (name or id). Table 49.221. Command arguments Value Summary -h, --help Show this help message and exit --name Clear the listener name. --description Clear the description of this listener. --connection-limit Reset the connection limit to the api default. --default-pool Clear the default pool from the listener. --default-tls-container-ref Remove the default tls container reference from the listener. --sni-container-refs Remove the tls sni container references from the listener. --insert-headers Clear the insert headers from the listener. --timeout-client-data Reset the client data timeout to the api default. --timeout-member-connect Reset the member connect timeout to the api default. --timeout-member-data Reset the member data timeout to the api default. --timeout-tcp-inspect Reset the tcp inspection timeout to the api default. --client-ca-tls-container-ref Clear the client ca tls container reference from the listener. --client-authentication Reset the client authentication setting to the api default. --client-crl-container-ref Clear the client crl container reference from the listener. --allowed-cidrs Clear all allowed cidrs from the listener. --tls-versions Clear all tls versions from the listener. --tls-ciphers Clear all tls ciphers from the listener. --wait Wait for action to complete. --alpn-protocols Clear all alpn protocols from the listener. --tag <tag> Tag to be removed from the listener (repeat option to remove multiple tags) --all-tag Clear all tags associated with the listener 49.58. loadbalancer member create Creating a member in a pool Usage: Table 49.222. Positional arguments Value Summary <pool> Id or name of the pool to create the member for. Table 49.223. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the member. --disable-backup Disable member backup (default) --enable-backup Enable member backup --weight <weight> The weight of a member determines the portion of requests or connections it services compared to the other members of the pool. --address <ip_address> The ip address of the backend member server --subnet-id <subnet_id> The subnet id the member service is accessible from. --protocol-port <protocol_port> The protocol port number the backend member server is listening on. --monitor-port <monitor_port> An alternate protocol port used for health monitoring a backend member. --monitor-address <monitor_address> An alternate ip address used for health monitoring a backend member. --enable Enable member (default) --disable Disable member --wait Wait for action to complete --tag <tag> Tag to be added to the member (repeat option to set multiple tags) --no-tag No tags associated with the member Table 49.224. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.225. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.226. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.227. 
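For example (the member name, IP address, and port are illustrative, and a pool named web-pool is assumed to exist), a backend server could be added to a pool with the member create command described above:

$ openstack loadbalancer member create --name web-server-1 --address 192.0.2.10 --protocol-port 8080 --wait web-pool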
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.59. loadbalancer member delete Delete a member from a pool Usage: Table 49.228. Positional arguments Value Summary <pool> Pool name or id to delete the member from. <member> Name or id of the member to be deleted. Table 49.229. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.60. loadbalancer member list List members in a pool Usage: Table 49.230. Positional arguments Value Summary <pool> Pool name or id to list the members of. Table 49.231. Command arguments Value Summary -h, --help Show this help message and exit --tags <tag>[,<tag>,... ] List member which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List member which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude member which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude member which have any given tag(s) (comma- separated list of tags) Table 49.232. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.233. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.234. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.235. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.61. loadbalancer member set Update a member Usage: Table 49.236. Positional arguments Value Summary <pool> Pool that the member to update belongs to (name or ID). <member> Name or id of the member to update Table 49.237. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the member --disable-backup Disable member backup (default) --enable-backup Enable member backup --weight <weight> Set the weight of member in the pool --monitor-port <monitor_port> An alternate protocol port used for health monitoring a backend member --monitor-address <monitor_address> An alternate ip address used for health monitoring a backend member. --enable Set the admin_state_up to true --disable Set the admin_state_up to false --wait Wait for action to complete --tag <tag> Tag to be added to the member (repeat option to set multiple tags) --no-tag Clear tags associated with the member. 
specify both --tag and --no-tag to overwrite current tags 49.62. loadbalancer member show Show the details of a single member Usage: Table 49.238. Positional arguments Value Summary <pool> Pool name or id to show the members of. <member> Name or id of the member to show. Table 49.239. Command arguments Value Summary -h, --help Show this help message and exit Table 49.240. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.241. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.242. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.243. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.63. loadbalancer member unset Clear member settings Usage: Table 49.244. Positional arguments Value Summary <pool> Pool that the member to update belongs to (name or id). <member> Member to modify (name or id). Table 49.245. Command arguments Value Summary -h, --help Show this help message and exit --backup Clear the backup member flag. --monitor-address Clear the member monitor address. --monitor-port Clear the member monitor port. --name Clear the member name. --weight Reset the member weight to the api default. --wait Wait for action to complete --tag <tag> Tag to be removed from the member (repeat option to remove multiple tags) --all-tag Clear all tags associated with the member 49.64. loadbalancer pool create Create a pool Usage: Table 49.246. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set pool name. --description <description> Set pool description. --protocol {TCP,HTTP,HTTPS,TERMINATED_HTTPS,PROXY,PROXYV2,UDP,SCTP} Set the pool protocol. --listener <listener> Listener to add the pool to (name or id). --loadbalancer <load_balancer> Load balancer to add the pool to (name or id). --session-persistence <session persistence> Set the session persistence for the listener (key=value). --lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT} Load balancing algorithm to use. --enable Enable pool (default). --disable Disable pool. --tls-container-ref <container-ref> The reference to the key manager service secrets container containing the certificate and key for ``tls_enabled`` pools to re-encrypt the traffic to backend member servers. --ca-tls-container-ref <ca_tls_container_ref> The reference to the key manager service secrets container containing the CA certificate for ``tls_enabled`` pools to check the backend member servers' certificates. --crl-container-ref <crl_container_ref> The reference to the key manager service secrets container containing the CA revocation list file for ``tls_enabled`` pools to validate the backend member servers' certificates. --enable-tls Enable backend member re-encryption. --disable-tls Disable backend member re-encryption. --wait Wait for action to complete --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the pool in openssl cipher string format.
--tls-version [<tls_versions>] Set the tls protocol version to be used by the pool (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the pool (can be set multiple times). --tag <tag> Tag to be added to the pool (repeat option to set multiple tags) --no-tag No tags associated with the pool Table 49.247. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.248. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.249. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.250. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.65. loadbalancer pool delete Delete a pool Usage: Table 49.251. Positional arguments Value Summary <pool> Pool to delete (name or id). Table 49.252. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete 49.66. loadbalancer pool list List pools Usage: Table 49.253. Command arguments Value Summary -h, --help Show this help message and exit --loadbalancer <loadbalancer> Filter by load balancer (name or id). --tags <tag>[,<tag>,... ] List pool which have all given tag(s) (comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List pool which have any given tag(s) (comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude pool which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude pool which have any given tag(s) (comma- separated list of tags) Table 49.254. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.255. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.256. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.257. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.67. loadbalancer pool set Update a pool Usage: Table 49.258. Positional arguments Value Summary <pool> Pool to update (name or id). Table 49.259. 
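As an illustrative example of the pool create command above (the pool name is a placeholder and a listener named http-listener is assumed to exist), a round-robin HTTP pool might be created with:

$ openstack loadbalancer pool create --name web-pool --lb-algorithm ROUND_ROBIN --listener http-listener --protocol HTTP --wait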
Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the pool. --description <description> Set the description of the pool. --session-persistence <session_persistence> Set the session persistence for the listener (key=value). --lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT} Set the load balancing algorithm to use. --enable Enable pool. --disable Disable pool. --tls-container-ref <container-ref> The uri to the key manager service secrets container containing the certificate and key for TERMINATED_TLS pools to re-encrpt the traffic from TERMINATED_TLS listener to backend servers. --ca-tls-container-ref <ca_tls_container_ref> The uri to the key manager service secrets container containing the CA certificate for TERMINATED_TLS listeners to check the backend servers certificates in ssl traffic. --crl-container-ref <crl_container_ref> The uri to the key manager service secrets container containting the CA revocation list file for TERMINATED_TLS listeners to valid the backend servers certificates in ssl traffic. --enable-tls Enable backend associated members re-encryption. --disable-tls Disable backend associated members re-encryption. --wait Wait for action to complete --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the pool in openssl cipher string format. --tls-version [<tls_versions>] Set the tls protocol version to be used by the pool (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the pool (can be set multiple times). --tag <tag> Tag to be added to the pool (repeat option to set multiple tags) --no-tag Clear tags associated with the pool. specify both --tag and --no-tag to overwrite current tags 49.68. loadbalancer pool show Show the details of a single pool Usage: Table 49.260. Positional arguments Value Summary <pool> Name or uuid of the pool. Table 49.261. Command arguments Value Summary -h, --help Show this help message and exit Table 49.262. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.263. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.264. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.265. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.69. loadbalancer pool unset Clear pool settings Usage: Table 49.266. Positional arguments Value Summary <pool> Pool to modify (name or id). Table 49.267. Command arguments Value Summary -h, --help Show this help message and exit --name Clear the pool name. --description Clear the description of this pool. --ca-tls-container-ref Clear the certificate authority certificate reference on this pool. --crl-container-ref Clear the certificate revocation list reference on this pool. --session-persistence Disables session persistence on the pool. --tls-container-ref Clear the certificate reference for this pool. 
--tls-versions Clear all tls versions from the pool. --tls-ciphers Clear all tls ciphers from the pool. --wait Wait for action to complete --alpn-protocols Clear all alpn protocols from the pool. --tag <tag> Tag to be removed from the pool (repeat option to remove multiple tags) --all-tag Clear all tags associated with the pool 49.70. loadbalancer provider capability list List specified provider driver's capabilities. Usage: Table 49.268. Positional arguments Value Summary <provider_name> Name of the provider driver. Table 49.269. Command arguments Value Summary -h, --help Show this help message and exit --flavor Get capabilities for flavor only. --availability-zone Get capabilities for availability zone only. Table 49.270. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.271. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.272. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.273. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.71. loadbalancer provider list List all providers Usage: Table 49.274. Command arguments Value Summary -h, --help Show this help message and exit Table 49.275. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.276. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.277. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.278. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.72. loadbalancer quota defaults show Show quota defaults Usage: Table 49.279. Command arguments Value Summary -h, --help Show this help message and exit Table 49.280. 
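For example, the enabled provider drivers, and the flavor capabilities of one of them, can be inspected with the provider commands above (the amphora provider is used here only as an illustration):

$ openstack loadbalancer provider list
$ openstack loadbalancer provider capability list --flavor amphora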
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.281. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.282. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.283. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.73. loadbalancer quota list List quotas Usage: Table 49.284. Command arguments Value Summary -h, --help Show this help message and exit --project <project-id> Name or uuid of the project. Table 49.285. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.286. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.287. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.288. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.74. loadbalancer quota reset Resets quotas to default quotas Usage: Table 49.289. Positional arguments Value Summary <project> Project to reset quotas (name or id) Table 49.290. Command arguments Value Summary -h, --help Show this help message and exit 49.75. loadbalancer quota set Update a quota Usage: Table 49.291. Positional arguments Value Summary <project> Name or uuid of the project. Table 49.292. Command arguments Value Summary -h, --help Show this help message and exit Table 49.293. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.294. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.295. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.296. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 49.297. Quota limits Value Summary At least one of the following arguments is required.--healthmonitor <health_monitor> New value for the health monitor quota. value -1 means unlimited. --listener <listener> New value for the listener quota. value -1 means unlimited. --loadbalancer <load_balancer> New value for the load balancer quota limit. value -1 means unlimited. --member <member> New value for the member quota limit. value -1 means unlimited. --pool <pool> New value for the pool quota limit. value -1 means unlimited. --l7policy <l7policy> New value for the l7policy quota limit. value -1 means unlimited. --l7rule <l7rule> New value for the l7rule quota limit. value -1 means unlimited. 49.76. loadbalancer quota show Show the quota details for a project Usage: Table 49.298. Positional arguments Value Summary <project> Name or uuid of the project. Table 49.299. Command arguments Value Summary -h, --help Show this help message and exit Table 49.300. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.301. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.302. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.303. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.77. loadbalancer quota unset Clear quota settings Usage: Table 49.304. Positional arguments Value Summary <project> Name or uuid of the project. Table 49.305. Command arguments Value Summary -h, --help Show this help message and exit --loadbalancer Reset the load balancer quota to the default. --listener Reset the listener quota to the default. --pool Reset the pool quota to the default. --member Reset the member quota to the default. --healthmonitor Reset the health monitor quota to the default. --l7policy Reset the l7policy quota to the default. --l7rule Reset the l7rule quota to the default. 49.78. loadbalancer set Update a load balancer Usage: Table 49.306. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer to update. Table 49.307. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set load balancer name. --description <description> Set load balancer description. --vip-qos-policy-id <vip_qos_policy_id> Set qos policy id for vip port. unset with none . --enable Enable load balancer. --disable Disable load balancer. --wait Wait for action to complete --tag <tag> Tag to be added to the load balancer (repeat option to set multiple tags) --no-tag Clear tags associated with the load balancer. specify both --tag and --no-tag to overwrite current tags 49.79. loadbalancer show Show the details for a single load balancer Usage: Table 49.308. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 49.309. 
Command arguments Value Summary -h, --help Show this help message and exit Table 49.310. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.311. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.312. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.313. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.80. loadbalancer stats show Shows the current statistics for a load balancer Usage: Table 49.314. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 49.315. Command arguments Value Summary -h, --help Show this help message and exit Table 49.316. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.317. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.318. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.319. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.81. loadbalancer status show Display load balancer status tree in json format Usage: Table 49.320. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 49.321. Command arguments Value Summary -h, --help Show this help message and exit 49.82. loadbalancer unset Clear load balancer settings Usage: Table 49.322. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer to update. Table 49.323. Command arguments Value Summary -h, --help Show this help message and exit --name Clear the load balancer name. --description Clear the load balancer description. --vip-qos-policy-id Clear the load balancer qos policy. --wait Wait for action to complete --tag <tag> Tag to be removed from the load balancer (repeat option to remove multiple tags) --all-tag Clear all tags associated with the load balancer
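The following sketch, which is not part of the original reference, illustrates the set and unset operations described above: it tags a load balancer, inspects the result, and then clears all tags. The load balancer name lb1 and the tag values are hypothetical placeholders, and printing the tags column with -c assumes a client that includes tags in the show output.
openstack loadbalancer set --tag env=staging --tag team=web --wait lb1
openstack loadbalancer show lb1 -c tags
openstack loadbalancer unset --all-tag --wait lb1
openstack loadbalancer status show lb1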
[ "openstack loadbalancer amphora configure [-h] [--wait] <amphora-id>", "openstack loadbalancer amphora delete [-h] [--wait] <amphora-id>", "openstack loadbalancer amphora failover [-h] [--wait] <amphora-id>", "openstack loadbalancer amphora list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--loadbalancer <loadbalancer>] [--compute-id <compute-id>] [--role {BACKUP,MASTER,STANDALONE}] [--status {ALLOCATED,BOOTING,DELETED,ERROR,PENDING_CREATE,PENDING_DELETE,READY}] [--long]", "openstack loadbalancer amphora show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <amphora-id>", "openstack loadbalancer amphora stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--listener <listener>] <amphora-id>", "openstack loadbalancer availabilityzone create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --availabilityzoneprofile <availabilityzone_profile> [--description <description>] [--enable | --disable]", "openstack loadbalancer availabilityzone delete [-h] <availabilityzone>", "openstack loadbalancer availabilityzone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--availabilityzoneprofile <availabilityzone_profile>] [--enable | --disable]", "openstack loadbalancer availabilityzone set [-h] [--description <description>] [--enable | --disable] <availabilityzone>", "openstack loadbalancer availabilityzone show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <availabilityzone>", "openstack loadbalancer availabilityzone unset [-h] [--description] <availabilityzone>", "openstack loadbalancer availabilityzoneprofile create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --provider <provider name> --availability-zone-data <availability_zone_data>", "openstack loadbalancer availabilityzoneprofile delete [-h] <availabilityzone_profile>", "openstack loadbalancer availabilityzoneprofile list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--provider <provider_name>]", "openstack loadbalancer availabilityzoneprofile set [-h] [--name <name>] [--provider <provider_name>] [--availabilityzone-data <availabilityzone_data>] <availabilityzone_profile>", "openstack loadbalancer availabilityzoneprofile show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <availabilityzone_profile>", "openstack loadbalancer create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] 
[--vip-address <vip_address>] [--vip-port-id <vip_port_id>] [--vip-subnet-id <vip_subnet_id>] [--vip-network-id <vip_network_id>] [--vip-qos-policy-id <vip_qos_policy_id>] [--project <project>] [--provider <provider>] [--availability-zone <availability_zone>] [--enable | --disable] [--flavor <flavor>] [--wait] [--tag <tag> | --no-tag]", "openstack loadbalancer delete [-h] [--cascade] [--wait] <load_balancer>", "openstack loadbalancer failover [-h] [--wait] <load_balancer>", "openstack loadbalancer flavor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --flavorprofile <flavor_profile> [--description <description>] [--enable | --disable]", "openstack loadbalancer flavor delete [-h] <flavor>", "openstack loadbalancer flavor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--flavorprofile <flavor_profile>] [--enable | --disable]", "openstack loadbalancer flavor set [-h] [--name <name>] [--enable | --disable] <flavor>", "openstack loadbalancer flavor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor>", "openstack loadbalancer flavor unset [-h] [--description] <flavor>", "openstack loadbalancer flavorprofile create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --provider <provider name> --flavor-data <flavor_data>", "openstack loadbalancer flavorprofile delete [-h] <flavor_profile>", "openstack loadbalancer flavorprofile list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--provider <provider_name>]", "openstack loadbalancer flavorprofile set [-h] [--name <name>] [--provider <provider_name>] [--flavor-data <flavor_data>] <flavor_profile>", "openstack loadbalancer flavorprofile show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor_profile>", "openstack loadbalancer healthmonitor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] --delay <delay> [--domain-name <domain_name>] [--expected-codes <codes>] [--http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}] [--http-version <http_version>] --timeout <timeout> --max-retries <max_retries> [--url-path <url_path>] --type {PING,HTTP,TCP,HTTPS,TLS-HELLO,UDP-CONNECT,SCTP} [--max-retries-down <max_retries_down>] [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <pool>", "openstack loadbalancer healthmonitor delete [-h] [--wait] <health_monitor>", "openstack loadbalancer healthmonitor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack 
loadbalancer healthmonitor set [-h] [--name <name>] [--delay <delay>] [--domain-name <domain_name>] [--expected-codes <codes>] [--http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}] [--http-version <http_version>] [--timeout <timeout>] [--max-retries <max_retries>] [--max-retries-down <max_retries_down>] [--url-path <url_path>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <health_monitor>", "openstack loadbalancer healthmonitor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <health_monitor>", "openstack loadbalancer healthmonitor unset [-h] [--domain-name] [--expected-codes] [--http-method] [--http-version] [--max-retries-down] [--name] [--url-path] [--wait] [--tag <tag> | --all-tag] <health_monitor>", "openstack loadbalancer l7policy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] --action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT} [--redirect-pool <pool> | --redirect-url <url> | --redirect-prefix <url>] [--redirect-http-code <redirect_http_code>] [--position <position>] [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <listener>", "openstack loadbalancer l7policy delete [-h] [--wait] <policy>", "openstack loadbalancer l7policy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--listener LISTENER] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer l7policy set [-h] [--name <name>] [--description <description>] [--action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT}] [--redirect-pool <pool> | --redirect-url <url> | --redirect-prefix <url>] [--redirect-http-code <redirect_http_code>] [--position <position>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <policy>", "openstack loadbalancer l7policy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <policy>", "openstack loadbalancer l7policy unset [-h] [--description] [--name] [--redirect-http-code] [--wait] [--tag <tag> | --all-tag] <policy>", "openstack loadbalancer l7rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH} [--invert] --value <value> [--key <key>] --type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD} [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <l7policy>", "openstack loadbalancer l7rule delete [-h] [--wait] <l7policy> <rule_id>", "openstack loadbalancer l7rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]] <l7policy>", "openstack loadbalancer l7rule set [-h] [--compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH}] [--invert] [--value <value>] 
[--key <key>] [--type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD}] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <l7policy> <l7rule_id>", "openstack loadbalancer l7rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <l7policy> <l7rule_id>", "openstack loadbalancer l7rule unset [-h] [--invert] [--key] [--wait] [--tag <tag> | --all-tag] <l7policy> <l7rule_id>", "openstack loadbalancer list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--enable | --disable] [--project <project-id>] [--vip-network-id <vip_network_id>] [--vip-subnet-id <vip_subnet_id>] [--vip-qos-policy-id <vip_qos_policy_id>] [--vip-port-id <vip_port_id>] [--provisioning-status {ACTIVE,DELETED,ERROR,PENDING_CREATE,PENDING_UPDATE,PENDING_DELETE}] [--operating-status {ONLINE,DRAINING,OFFLINE,DEGRADED,ERROR,NO_MONITOR}] [--provider <provider>] [--flavor <flavor>] [--availability-zone <availability_zone>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer listener create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] --protocol {TCP,HTTP,HTTPS,TERMINATED_HTTPS,UDP,SCTP} [--connection-limit <limit>] [--default-pool <pool>] [--default-tls-container-ref <container_ref>] [--sni-container-refs [<container_ref> ...]] [--insert-headers <header=value,...>] --protocol-port <port> [--timeout-client-data <timeout>] [--timeout-member-connect <timeout>] [--timeout-member-data <timeout>] [--timeout-tcp-inspect <timeout>] [--enable | --disable] [--client-ca-tls-container-ref <container_ref>] [--client-authentication {NONE,OPTIONAL,MANDATORY}] [--client-crl-container-ref <client_crl_container_ref>] [--allowed-cidr [<allowed_cidr>]] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag> | --no-tag] <loadbalancer>", "openstack loadbalancer listener delete [-h] [--wait] <listener>", "openstack loadbalancer listener list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--loadbalancer <loadbalancer>] [--enable | --disable] [--project <project>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer listener set [-h] [--name <name>] [--description <description>] [--connection-limit <limit>] [--default-pool <pool>] [--default-tls-container-ref <container-ref>] [--sni-container-refs [<container-ref> ...]] [--insert-headers <header=value>] [--timeout-client-data <timeout>] [--timeout-member-connect <timeout>] [--timeout-member-data <timeout>] [--timeout-tcp-inspect <timeout>] [--enable | --disable] [--client-ca-tls-container-ref <container_ref>] [--client-authentication {NONE,OPTIONAL,MANDATORY}] [--client-crl-container-ref <client_crl_container_ref>] [--allowed-cidr [<allowed_cidr>]] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version 
[<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag>] [--no-tag] <listener>", "openstack loadbalancer listener show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <listener>", "openstack loadbalancer listener stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <listener>", "openstack loadbalancer listener unset [-h] [--name] [--description] [--connection-limit] [--default-pool] [--default-tls-container-ref] [--sni-container-refs] [--insert-headers] [--timeout-client-data] [--timeout-member-connect] [--timeout-member-data] [--timeout-tcp-inspect] [--client-ca-tls-container-ref] [--client-authentication] [--client-crl-container-ref] [--allowed-cidrs] [--tls-versions] [--tls-ciphers] [--wait] [--alpn-protocols] [--tag <tag> | --all-tag] <listener>", "openstack loadbalancer member create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--disable-backup | --enable-backup] [--weight <weight>] --address <ip_address> [--subnet-id <subnet_id>] --protocol-port <protocol_port> [--monitor-port <monitor_port>] [--monitor-address <monitor_address>] [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <pool>", "openstack loadbalancer member delete [-h] [--wait] <pool> <member>", "openstack loadbalancer member list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]] <pool>", "openstack loadbalancer member set [-h] [--name <name>] [--disable-backup | --enable-backup] [--weight <weight>] [--monitor-port <monitor_port>] [--monitor-address <monitor_address>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <pool> <member>", "openstack loadbalancer member show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <pool> <member>", "openstack loadbalancer member unset [-h] [--backup] [--monitor-address] [--monitor-port] [--name] [--weight] [--wait] [--tag <tag> | --all-tag] <pool> <member>", "openstack loadbalancer pool create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] --protocol {TCP,HTTP,HTTPS,TERMINATED_HTTPS,PROXY,PROXYV2,UDP,SCTP} (--listener <listener> | --loadbalancer <load_balancer>) [--session-persistence <session persistence>] --lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT} [--enable | --disable] [--tls-container-ref <container-ref>] [--ca-tls-container-ref <ca_tls_container_ref>] [--crl-container-ref <crl_container_ref>] [--enable-tls | --disable-tls] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag> | --no-tag]", "openstack loadbalancer pool delete [-h] [--wait] <pool>", "openstack loadbalancer pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column 
SORT_COLUMN] [--sort-ascending | --sort-descending] [--loadbalancer <loadbalancer>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer pool set [-h] [--name <name>] [--description <description>] [--session-persistence <session_persistence>] [--lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT}] [--enable | --disable] [--tls-container-ref <container-ref>] [--ca-tls-container-ref <ca_tls_container_ref>] [--crl-container-ref <crl_container_ref>] [--enable-tls | --disable-tls] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag>] [--no-tag] <pool>", "openstack loadbalancer pool show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <pool>", "openstack loadbalancer pool unset [-h] [--name] [--description] [--ca-tls-container-ref] [--crl-container-ref] [--session-persistence] [--tls-container-ref] [--tls-versions] [--tls-ciphers] [--wait] [--alpn-protocols] [--tag <tag> | --all-tag] <pool>", "openstack loadbalancer provider capability list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--flavor | --availability-zone] <provider_name>", "openstack loadbalancer provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack loadbalancer quota defaults show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]", "openstack loadbalancer quota list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project-id>]", "openstack loadbalancer quota reset [-h] <project>", "openstack loadbalancer quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--healthmonitor <health_monitor>] [--listener <listener>] [--loadbalancer <load_balancer>] [--member <member>] [--pool <pool>] [--l7policy <l7policy>] [--l7rule <l7rule>] <project>", "openstack loadbalancer quota show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <project>", "openstack loadbalancer quota unset [-h] [--loadbalancer] [--listener] [--pool] [--member] [--healthmonitor] [--l7policy] [--l7rule] <project>", "openstack loadbalancer set [-h] [--name <name>] [--description <description>] [--vip-qos-policy-id <vip_qos_policy_id>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <load_balancer>", "openstack loadbalancer show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <load_balancer>", "openstack loadbalancer stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <load_balancer>", "openstack 
loadbalancer status show [-h] <load_balancer>", "openstack loadbalancer unset [-h] [--name] [--description] [--vip-qos-policy-id] [--wait] [--tag <tag> | --all-tag] <load_balancer>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/loadbalancer
Chapter 8. Quotas
Chapter 8. Quotas 8.1. Resource quotas per project A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project. This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them. 8.1.1. Resources managed by quotas The following describes the set of compute resources and object types that can be managed by a quota. Note A pod is in a terminal state if status.phase in (Failed, Succeeded) is true. Table 8.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. Table 8.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. Table 8.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of ReplicationControllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. services.loadbalancers The total number of services of type LoadBalancer that can exist in the project. 
services.nodeports The total number of services of type NodePort that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of imagestreams that can exist in the project. 8.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu 8.1.3. Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system. 8.1.4. Requests versus limits When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 8.1.5. Sample resource quota definitions core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 services.loadbalancers: "2" 6 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. 6 The total number of services of type LoadBalancer that can exist in the project. 
openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 limits.cpu: "2" 4 limits.memory: 2Gi 5 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 5 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 scopes: - NotTerminating 4 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods fall under NotTerminating unless the RestartNever policy is applied. compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 scopes: - Terminating 4 1 The total number of pods in a terminating state. 2 Across all pods in a terminating state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a terminating state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota charges for build or deployer pods, but not long running pods like a web server or database. storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 
5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 8 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 9 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. 8.1.6. Creating a quota You can create a quota to constrain resource usage in a given project. Procedure Define the quota in a file. Use the file to create the quota and apply it to a project: USD oc create -f <file> [-n <project_name>] For example: USD oc create -f core-object-counts.yaml -n demoproject 8.1.6.1. Creating object count quotas You can create an object count quota for all standard namespaced resource types on OpenShift Container Platform, such as BuildConfig and DeploymentConfig objects. An object count quota places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. The quota can only be created if there are enough spare resources within the project. Procedure To configure an object count quota for a resource: Run the following command: USD oc create quota <name> \ --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1 1 The <resource> variable is the name of the resource, and <group> is the API group, if applicable. Use the oc api-resources command for a list of resources and their associated API groups. For example: USD oc create quota test \ --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 Example output resourcequota "test" created This example limits the listed resources to the hard limit in each project in the cluster. Verify that the quota was created: USD oc describe quota test Example output Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 8.1.6.2. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure Determine how many GPUs are available on a node in your cluster. For example: # oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0 In this example, 2 GPUs are available. Set a quota in the namespace nvidia. 
In this example, the quota is 1 : # cat gpu-quota.yaml Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota: # oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml : apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Create the pod: # oc create -f gpu-pod.yaml Verify that the pod is running: # oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: # oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 8.1.7. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details. Procedure Get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject Example output NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Example output Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 8.1.8. Configuring explicit resource quotas Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Add a resource quota definition to a project request template: If a project request template does not exist in a cluster: Create a bootstrap project template and output it to a file called template.yaml : USD oc adm create-bootstrap-project-template -o yaml > template.yaml Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. 
The definition must be added before the parameters: section in the template: - apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project. 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot create claims. Create a project request template from the modified template.yaml file in the openshift-config namespace: USD oc create -f template.yaml -n openshift-config Note To include the configuration as a kubectl.kubernetes.io/last-applied-configuration annotation, add the --save-config option to the oc create command. By default, the template is called project-request . If a project request template already exists within a cluster: Note If you declaratively or imperatively manage objects within your cluster by using configuration files, edit the existing project request template through those files instead. List templates in the openshift-config namespace: USD oc get templates -n openshift-config Edit an existing project request template: USD oc edit template <project_request_template> -n openshift-config Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template. If you created a project request template, reference it in the cluster's project configuration resource: Access the project configuration resource for editing: By using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . By using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name project-request : apiVersion: config.openshift.io/v1 kind: Project metadata: ... 
spec: projectRequestTemplate: name: project-request Verify that the resource quota is applied when projects are created: Create a project: USD oc new-project <project_name> List the project's resource quotas: USD oc get resourcequotas Describe the resource quota in detail: USD oc describe resourcequotas <resource_quota_name> 8.2. Resource quotas across multiple projects A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated and that aggregate is used to limit resources across all the selected projects. This guide describes how cluster administrators can set and manage resource quotas across multiple projects. 8.2.1. Selecting multiple projects during quota creation When creating quotas, you can select multiple projects based on annotation selection, label selection, or both. Procedure To select projects based on annotations, run the following command: USD oc create clusterquota for-user \ --project-annotation-selector openshift.io/requester=<user_name> \ --hard pods=10 \ --hard secrets=20 This creates the following ClusterResourceQuota object: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: "10" secrets: "20" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" total: 5 hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" 1 The ResourceQuotaSpec object that will be enforced over the selected projects. 2 A simple key-value selector for annotations. 3 A label selector that can be used to select projects. 4 A per-namespace map that describes current quota usage in each selected project. 5 The aggregate usage across all selected projects. This multi-project quota document controls all projects requested by <user_name> using the default project request endpoint. You are limited to 10 pods and 20 secrets. Similarly, to select projects based on labels, run this command: USD oc create clusterresourcequota for-name \ 1 --project-label-selector=name=frontend \ 2 --hard=pods=10 --hard=secrets=20 1 Both clusterresourcequota and clusterquota are aliases of the same command. for-name is the name of the ClusterResourceQuota object. 2 To select projects by label, provide a key-value pair by using the format --project-label-selector=key=value . This creates the following ClusterResourceQuota object definition: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: "10" secrets: "20" selector: annotations: null labels: matchLabels: name: frontend 8.2.2. Viewing applicable cluster resource quotas A project administrator is not allowed to create or modify the multi-project quota that limits his or her project, but the administrator is allowed to view the multi-project quota documents that are applied to his or her project. The project administrator can do this via the AppliedClusterResourceQuota resource. Procedure To view quotas applied to a project, run: USD oc describe AppliedClusterResourceQuota Example output Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20 8.2.3. 
Selection granularity Because of the locking involved when claiming quota allocations, the number of active projects selected by a multi-project quota is an important consideration. Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects.
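To gauge how close a multi-project quota is to the 100-project guideline above, you can count the namespaces it currently selects. The following is a minimal sketch, assuming the for-user quota from the earlier example and an oc client that supports JSONPath output; the status.namespaces field it reads is the per-namespace map shown in the ClusterResourceQuota example above.
oc get clusterresourcequota for-user -o jsonpath='{.status.namespaces[*].namespace}{"\n"}' | wc -w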
[ "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9", "oc create -f <file> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4", "resourcequota \"test\" created", "oc describe quota test", "Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0", "cat gpu-quota.yaml", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc create -f gpu-pod.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: 
requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "oc get quota -n demoproject", "NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m", "oc describe quota core-object-counts -n demoproject", "Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f template.yaml -n openshift-config", "oc get templates -n openshift-config", "oc edit template <project_request_template> -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request", "oc new-project <project_name>", "oc get resourcequotas", "oc describe resourcequotas <resource_quota_name>", "oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"", "oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend", "oc describe AppliedClusterResourceQuota", "Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/quotas
probe::netdev.change_mac
probe::netdev.change_mac Name probe::netdev.change_mac - Called when a network device has its MAC address changed Synopsis netdev.change_mac Values mac_len The MAC address length old_mac The current MAC address dev_name The device that will have the MAC changed new_mac The new MAC address
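A minimal SystemTap sketch that prints the values this probe exposes; it assumes the address values are provided as printable strings.
stap -e 'probe netdev.change_mac { printf("%s: MAC %s -> %s (len %d)\n", dev_name, old_mac, new_mac, mac_len) }'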
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-change-mac
Chapter 15. Enabling Red Hat build of Keycloak Health checks
Chapter 15. Enabling Red Hat build of Keycloak Health checks Red Hat build of Keycloak has built-in support for health checks. This chapter describes how to enable and use the Red Hat build of Keycloak health checks. 15.1. Red Hat build of Keycloak health check endpoints Red Hat build of Keycloak exposes four health endpoints: /health/live /health/ready /health/started /health See the Quarkus SmallRye Health docs for information on the meaning of each endpoint. These endpoints respond with HTTP status 200 OK on success or 503 Service Unavailable on failure, and a JSON object like the following: Successful response for endpoints without additional per-check information: { "status": "UP", "checks": [] } Successful response for endpoints with information on the database connection: { "status": "UP", "checks": [ { "name": "Keycloak database connections health check", "status": "UP" } ] } 15.2. Enabling the health checks It is possible to enable the health checks using the build-time option health-enabled : bin/kc.[sh|bat] build --health-enabled=true By default, no check is returned from the health endpoints. 15.3. Using the health checks It is recommended that the health endpoints be monitored by external HTTP requests. Due to security measures that remove curl and other packages from the Red Hat build of Keycloak container image, local command-based monitoring will not function easily. If you are not using Red Hat build of Keycloak in a container, use any HTTP client you prefer to access the health check endpoints. 15.3.1. curl You may use a simple HTTP HEAD request to determine the live or ready state of Red Hat build of Keycloak. curl is a good HTTP client for this purpose. If Red Hat build of Keycloak is deployed in a container, you must run this command from outside it due to the previously mentioned security measures. For example: curl --head -fsS http://localhost:8080/health/ready If the command exits with status 0, then Red Hat build of Keycloak is live or ready , depending on which endpoint you called. Otherwise there is a problem. 15.3.2. Kubernetes Define an HTTP probe so that Kubernetes can externally monitor the health endpoints. Do not use a liveness command. 15.3.3. HEALTHCHECK The Dockerfile image HEALTHCHECK instruction defines a command that will be periodically executed inside the container as it runs. The Red Hat build of Keycloak container does not have any CLI HTTP clients installed. Consider installing curl as an additional RPM, as detailed in the Running Red Hat build of Keycloak in a container chapter. Note that your container may be less secure because of this. 15.4. Available Checks The table below shows the available checks. Check Description Requires Metrics Database Returns the status of the database connection pool. Yes For some checks, you must also enable metrics as indicated by the Requires Metrics column. To enable metrics use the metrics-enabled option as follows: bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true 15.5. Relevant options Value health-enabled 🛠 If the server should expose health check endpoints. If enabled, health checks are available at the /health , /health/ready and /health/live endpoints. CLI: --health-enabled Env: KC_HEALTH_ENABLED true , false (default)
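As a sketch of the Kubernetes approach described above, the following writes hypothetical readiness and liveness probe definitions that target the health endpoints; port 8080 matches the curl example in this chapter, but your container spec, port, and timing values may differ.
cat <<'EOF' > keycloak-probes.yaml
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
EOF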
[ "{ \"status\": \"UP\", \"checks\": [] }", "{ \"status\": \"UP\", \"checks\": [ { \"name\": \"Keycloak database connections health check\", \"status\": \"UP\" } ] }", "bin/kc.[sh|bat] build --health-enabled=true", "curl --head -fsS http://localhost:8080/health/ready", "bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/health-
Chapter 46. Header
Chapter 46. Header The Header Expression Language allows you to extract values of named headers. 46.1. Dependencies The Header language is part of camel-core . When using header with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 46.2. Header Options The Header language supports 1 option, which is listed below. Name Default Java Type Description trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 46.3. Example usage The recipientList EIP can utilize a header: <route> <from uri="direct:a" /> <recipientList> <header>myHeader</header> </recipientList> </route> In this case, the list of recipients is contained in the header 'myHeader'. And the same example in Java DSL: from("direct:a").recipientList(header("myHeader")); 46.4. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation is in use. For example if using ribbon, then the client properties are defined in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation is in use. For example if using ribbon, then the client properties are defined in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation is in use.
For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. 
String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 
10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. 
true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. 
Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. 
Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. 
In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. 
String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>", "<route> <from uri=\"direct:a\" /> <recipientList> <header>myHeader</header> </recipientList> </route>", "from(\"direct:a\").recipientList(header(\"myHeader\"));" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-header-language-starter
10.3.8. Establishing an IP-over-InfiniBand (IPoIB) Connection
10.3.8. Establishing an IP-over-InfiniBand (IPoIB) Connection You can use NetworkManager to create an InfiniBand connection. Procedure 10.13. Adding a New InfiniBand Connection You can configure an InfiniBand connection by opening the Network Connections window, clicking Add , and selecting InfiniBand from the list. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the Add button to open the selection list. Select InfiniBand and then click Create . The Editing InfiniBand Connection 1 window appears. On the InfiniBand tab, select the transport mode from the drop-down list you want to use for the InfiniBand connection. Enter the InfiniBand MAC address. Review and confirm the settings and then click the Apply button. Edit the InfiniBand-specific settings by referring to the Configuring the InfiniBand Tab description below . Figure 10.15. Editing the newly created InfiniBand connection 1 Procedure 10.14. Editing an Existing InfiniBand Connection Follow these steps to edit an existing InfiniBand connection. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Select the connection you want to edit and click the Edit button. Select the InfiniBand tab. Configure the connection name, auto-connect behavior, and availability settings. Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the InfiniBand section of the Network Connections window. Connect automatically - Check this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Check this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. Edit the InfiniBand-specific settings by referring to the Configuring the InfiniBand Tab description below . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your InfiniBand connection, click the Apply button and NetworkManager will immediately save your customized configuration. Given a correct configuration, you can connect to your new or customized connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 10.3.9.5, "Configuring IPv6 Settings" . Configuring the InfiniBand Tab If you have already added a new InfiniBand connection (see Procedure 10.13, "Adding a New InfiniBand Connection" for instructions), you can edit the InfiniBand tab to set the parent interface and the InfiniBand ID. Transport mode Datagram or Connected mode can be selected from the drop-down list. Select the same mode the rest of your IPoIB network is using. 
Device MAC address The MAC address of the InfiniBand-capable device to be used for the InfiniBand network traffic. This hardware address field will be pre-filled if you have InfiniBand hardware installed. MTU Optionally sets a Maximum Transmission Unit (MTU) size to be used for packets sent over the InfiniBand connection.
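For reference, an IPoIB interface configured this way corresponds to an ifcfg file similar to the following sketch; the device name, addresses, and MTU are illustrative placeholders.
cat > /etc/sysconfig/network-scripts/ifcfg-ib0 <<'EOF'
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.10
NETMASK=255.255.255.0
CONNECTED_MODE=yes
MTU=65520
EOF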
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-establishing_an_infiniband_connection
Chapter 9. Installation and Booting
Chapter 9. Installation and Booting Add-on repositories are now handled correctly when generating and reading kickstart files. Previously, installation would stop and display an error when performing an installation from a kickstart file generated by an installation that used optical media and enabled one or more add-on repositories. With this update, generated kickstart files will include commands to automatically enable add-on repositories when necessary. (BZ#1099178) The zerombr command is now correctly added to anaconda-ks.cfg when installing using kickstart Previously, when an installation was performed with the kickstart utility using the zerombr option, this option was not added to the generated /root/anaconda-ks.cfg kickstart file. This bug has been fixed, and zerombr is now correctly added to anaconda-ks.cfg . (BZ#1246663) When using the network service, default routes are now correctly created on an installed system. Previously, device-specific GATEWAY values were being included in the /etc/sysconfig/network configuration file, which applies to all devices. As a consequence, for some network configurations using the network service, default routes were not created. With this update, the GATEWAY parameter is no longer created in /etc/sysconfig/network , and default routes are now created correctly. (BZ#1181290) The DEFROUTE option is now handled correctly when the installer generates a kickstart file. Previously, if the DEFROUTE option was set in an ifcfg configuration file during installation, this was not reflected in the kickstart file subsequently generated by the installer. This bug has been fixed, and now the installer generates kickstart files that reflect DEFROUTE settings used during installation by setting the --nodefroute network command option accordingly. (BZ#1274686) The kdump kernel is no longer added to /etc/zipl.conf when kernel-kdump is marked for installation Previously, when installing kernel-kdump , an entry for the kdump kernel was added to the list of kernels in the /etc/zipl.conf configuration file. This bug is now fixed, and the kdump kernel is no longer added to the list. (BZ#1256211)
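The fixes above all concern kickstart content; as a point of reference, a minimal kickstart fragment exercising the options mentioned (zerombr and --nodefroute) might look like the following sketch, with the device name chosen only for illustration.
cat <<'EOF' >> ks.cfg
zerombr
clearpart --all --initlabel
network --bootproto=dhcp --device=eth0 --nodefroute
EOF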
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_installation_and_booting
Architecture
Architecture Red Hat Advanced Cluster Security for Kubernetes 4.5 System architecture Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/architecture/index
function::pid2task
function::pid2task Name function::pid2task - The task_struct of the given process identifier Synopsis Arguments pid process identifier Description Return the task struct of the given process id.
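A small SystemTap sketch that uses this function together with task_execname to print the command name of a target process; the PID 1234 is a placeholder.
stap -e 'probe begin { t = pid2task(target()); if (t) printf("pid %d -> %s\n", target(), task_execname(t)); exit() }' -x 1234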
[ "pid2task:long(pid:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-pid2task
B.17. dmidecode
B.17. dmidecode B.17.1. RHBA-2011:1396 - dmidecode bug fix update An updated dmidecode package that fixes one bug is now available for Red Hat Enterprise Linux 6 Extended Update Support. The dmidecode package provides utilities for extracting x86 and Intel Itanium hardware information from the system BIOS or EFI (Extensible Firmware Interface), depending on the SMBIOS/DMI standard. This information typically includes system manufacturer, model name, serial number, BIOS version, and asset tag, as well as other details, depending on the manufacturer. Bug Fix BZ# 745558 Prior to this update, the extended records for the DMI types Memory Device (DMI type 17) and Memory Array Mapped Address (DMI type 19) were missing from the dmidecode utility output. With this update, dmidecode has been upgraded to upstream version 2.11, which updates support for the SMBIOS specification to version 2.7.1, thus fixing this bug. Now, the dmidecode output contains the extended records for DMI type 17 and DMI type 19. All users of dmidecode are advised to upgrade to this updated package, which fixes this bug.
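To see the records discussed in this fix on a running system, you can query the relevant DMI types directly; for example, as root:
dmidecode -t 17    # Memory Device records
dmidecode -t 19    # Memory Array Mapped Address records
dmidecode -s bios-version
dmidecode -s system-serial-number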
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/dmidecode
Preface
Preface Red Hat offers administrators tools for gathering data for your Red Hat Quay deployment. You can use this data to troubleshoot your Red Hat Quay deployment yourself, or file a support ticket.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/troubleshooting_red_hat_quay/pr01
function::int_arg
function::int_arg Name function::int_arg - Return function argument as signed int Synopsis Arguments n index of argument to return Description Return the value of argument n as a signed int (i.e., a 32-bit integer sign-extended to 64 bits).
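An illustrative SystemTap one-liner using int_arg inside a dwarfless kprobe; the probed function name and argument index are assumptions for this sketch, and on some kernels syscall-style functions need additional handling (for example asmlinkage) before register arguments can be read reliably.
stap -e 'probe kprobe.function("sys_write") { printf("fd=%d\n", int_arg(1)) }'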
[ "int_arg:long(n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-int-arg
4.3. Main Configuration File
4.3. Main Configuration File The /etc/selinux/config file is the main SELinux configuration file. It controls whether SELinux is enabled or disabled and which SELinux mode and SELinux policy are used: SELINUX= The SELINUX option sets whether SELinux is disabled or enabled and in which mode - enforcing or permissive - it is running: When using SELINUX=enforcing , SELinux policy is enforced, and SELinux denies access based on SELinux policy rules. Denial messages are logged. When using SELINUX=permissive , SELinux policy is not enforced. SELinux does not deny access, but denials are logged for actions that would have been denied if running SELinux in enforcing mode. When using SELINUX=disabled , SELinux is disabled, the SELinux module is not registered with the Linux kernel, and only DAC rules are used. SELINUXTYPE= The SELINUXTYPE option sets the SELinux policy to use. The targeted policy is the default policy. Only change this option if you want to use the MLS policy. For information on how to enable the MLS policy, see Section 4.13.2, "Enabling MLS in SELinux" .
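The runtime counterparts of these settings can be inspected and changed with the standard SELinux utilities; a quick sketch, run as root (the setenforce change does not persist across reboots):
getenforce
sestatus
setenforce 0        # switch the running system to permissive mode
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config   # persist the mode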
[ "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX=enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-main_configuration_file
Chapter 11. Adding managed datasources to Data Grid Server
Chapter 11. Adding managed datasources to Data Grid Server Optimize connection pooling and performance for JDBC database connections by adding managed datasources to your Data Grid Server configuration. 11.1. Configuring managed datasources Create managed datasources as part of your Data Grid Server configuration to optimize connection pooling and performance for JDBC database connections. You can then specify the JNDI name of the managed datasources in your caches, which centralizes JDBC connection configuration for your deployment. Prerequisites Copy database drivers to the server/lib directory in your Data Grid Server installation. Tip Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example: Procedure Open your Data Grid Server configuration for editing. Add a new data-source to the data-sources section. Uniquely identify the datasource with the name attribute or field. Specify a JNDI name for the datasource with the jndi-name attribute or field. Tip You use the JNDI name to specify the datasource in your JDBC cache store configuration. Set true as the value of the statistics attribute or field to enable statistics for the datasource through the /metrics endpoint. Provide JDBC driver details that define how to connect to the datasource in the connection-factory section. Specify the name of the database driver with the driver attribute or field. Specify the JDBC connection URL with the url attribute or field. Specify credentials with the username and password attributes or fields. Provide any other configuration as appropriate. Define how Data Grid Server nodes pool and reuse connections with connection pool tuning properties in the connection-pool section. Save the changes to your configuration. Verification Use the Data Grid Command Line Interface (CLI) to test the datasource connection, as follows: Start a CLI session. List all datasources and confirm the one you created is available. Test a datasource connection. Managed datasource configuration XML <server xmlns="urn:infinispan:server:14.0"> <data-sources> <!-- Defines a unique name for the datasource and JNDI name that you reference in JDBC cache store configuration. Enables statistics for the datasource, if required. --> <data-source name="ds" jndi-name="jdbc/postgres" statistics="true"> <!-- Specifies the JDBC driver that creates connections. --> <connection-factory driver="org.postgresql.Driver" url="jdbc:postgresql://localhost:5432/postgres" username="postgres" password="changeme"> <!-- Sets optional JDBC driver-specific connection properties. --> <connection-property name="name">value</connection-property> </connection-factory> <!-- Defines connection pool tuning properties.
--> <connection-pool initial-size="1" max-size="10" min-size="3" background-validation="1000" idle-removal="1" blocking-timeout="1000" leak-detection="10000"/> </data-source> </data-sources> </server> JSON { "server": { "data-sources": [{ "name": "ds", "jndi-name": "jdbc/postgres", "statistics": true, "connection-factory": { "driver": "org.postgresql.Driver", "url": "jdbc:postgresql://localhost:5432/postgres", "username": "postgres", "password": "changeme", "connection-properties": { "name": "value" } }, "connection-pool": { "initial-size": 1, "max-size": 10, "min-size": 3, "background-validation": 1000, "idle-removal": 1, "blocking-timeout": 1000, "leak-detection": 10000 } }] } } YAML server: dataSources: - name: ds jndiName: 'jdbc/postgres' statistics: true connectionFactory: driver: "org.postgresql.Driver" url: "jdbc:postgresql://localhost:5432/postgres" username: "postgres" password: "changeme" connectionProperties: name: value connectionPool: initialSize: 1 maxSize: 10 minSize: 3 backgroundValidation: 1000 idleRemoval: 1 blockingTimeout: 1000 leakDetection: 10000 11.2. Configuring caches with JNDI names When you add a managed datasource to Data Grid Server you can add the JNDI name to a JDBC-based cache store configuration. Prerequisites Configure Data Grid Server with a managed datasource. Procedure Open your cache configuration for editing. Add the data-source element or field to the JDBC-based cache store configuration. Specify the JNDI name of the managed datasource as the value of the jndi-url attribute. Configure the JDBC-based cache stores as appropriate. Save the changes to your configuration. JNDI name in cache configuration XML <distributed-cache> <persistence> <jdbc:string-keyed-jdbc-store> <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. --> <jdbc:data-source jndi-url="jdbc/postgres"/> <jdbc:string-keyed-table drop-on-exit="true" create-on-start="true" prefix="TBL"> <jdbc:id-column name="ID" type="VARCHAR(255)"/> <jdbc:data-column name="DATA" type="BYTEA"/> <jdbc:timestamp-column name="TS" type="BIGINT"/> <jdbc:segment-column name="S" type="INT"/> </jdbc:string-keyed-table> </jdbc:string-keyed-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "string-keyed-jdbc-store": { "data-source": { "jndi-url": "jdbc/postgres" }, "string-keyed-table": { "prefix": "TBL", "drop-on-exit": true, "create-on-start": true, "id-column": { "name": "ID", "type": "VARCHAR(255)" }, "data-column": { "name": "DATA", "type": "BYTEA" }, "timestamp-column": { "name": "TS", "type": "BIGINT" }, "segment-column": { "name": "S", "type": "INT" } } } } } } YAML distributedCache: persistence: stringKeyedJdbcStore: dataSource: jndi-url: "jdbc/postgres" stringKeyedTable: prefix: "TBL" dropOnExit: true createOnStart: true idColumn: name: "ID" type: "VARCHAR(255)" dataColumn: name: "DATA" type: "BYTEA" timestampColumn: name: "TS" type: "BIGINT" segmentColumn: name: "S" type: "INT" 11.3. Connection pool tuning properties You can tune JDBC connection pools for managed datasources in your Data Grid Server configuration. Property Description initial-size Initial number of connections the pool should hold. max-size Maximum number of connections in the pool. min-size Minimum number of connections the pool should hold. blocking-timeout Maximum time in milliseconds to block while waiting for a connection before throwing an exception. This will never throw an exception if creating a new connection takes an inordinately long period of time. 
The default is 0, meaning that a call waits indefinitely. background-validation Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled. validate-on-acquisition Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled. idle-removal Time in minutes a connection has to be idle before it can be removed. leak-detection Time in milliseconds a connection can be held before a leak warning is issued.
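If you enabled statistics for a datasource, its connection pool counters should be exposed through the server's /metrics endpoint mentioned earlier; the port and the filter pattern below are assumptions, since the exact metric names vary by Data Grid version.
curl -s http://localhost:11222/metrics | grep -i datasource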
[ "install org.postgresql:postgresql:42.4.3", "bin/cli.sh", "server datasource ls", "server datasource test my-datasource", "<server xmlns=\"urn:infinispan:server:14.0\"> <data-sources> <!-- Defines a unique name for the datasource and JNDI name that you reference in JDBC cache store configuration. Enables statistics for the datasource, if required. --> <data-source name=\"ds\" jndi-name=\"jdbc/postgres\" statistics=\"true\"> <!-- Specifies the JDBC driver that creates connections. --> <connection-factory driver=\"org.postgresql.Driver\" url=\"jdbc:postgresql://localhost:5432/postgres\" username=\"postgres\" password=\"changeme\"> <!-- Sets optional JDBC driver-specific connection properties. --> <connection-property name=\"name\">value</connection-property> </connection-factory> <!-- Defines connection pool tuning properties. --> <connection-pool initial-size=\"1\" max-size=\"10\" min-size=\"3\" background-validation=\"1000\" idle-removal=\"1\" blocking-timeout=\"1000\" leak-detection=\"10000\"/> </data-source> </data-sources> </server>", "{ \"server\": { \"data-sources\": [{ \"name\": \"ds\", \"jndi-name\": \"jdbc/postgres\", \"statistics\": true, \"connection-factory\": { \"driver\": \"org.postgresql.Driver\", \"url\": \"jdbc:postgresql://localhost:5432/postgres\", \"username\": \"postgres\", \"password\": \"changeme\", \"connection-properties\": { \"name\": \"value\" } }, \"connection-pool\": { \"initial-size\": 1, \"max-size\": 10, \"min-size\": 3, \"background-validation\": 1000, \"idle-removal\": 1, \"blocking-timeout\": 1000, \"leak-detection\": 10000 } }] } }", "server: dataSources: - name: ds jndiName: 'jdbc/postgres' statistics: true connectionFactory: driver: \"org.postgresql.Driver\" url: \"jdbc:postgresql://localhost:5432/postgres\" username: \"postgres\" password: \"changeme\" connectionProperties: name: value connectionPool: initialSize: 1 maxSize: 10 minSize: 3 backgroundValidation: 1000 idleRemoval: 1 blockingTimeout: 1000 leakDetection: 10000", "<distributed-cache> <persistence> <jdbc:string-keyed-jdbc-store> <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. --> <jdbc:data-source jndi-url=\"jdbc/postgres\"/> <jdbc:string-keyed-table drop-on-exit=\"true\" create-on-start=\"true\" prefix=\"TBL\"> <jdbc:id-column name=\"ID\" type=\"VARCHAR(255)\"/> <jdbc:data-column name=\"DATA\" type=\"BYTEA\"/> <jdbc:timestamp-column name=\"TS\" type=\"BIGINT\"/> <jdbc:segment-column name=\"S\" type=\"INT\"/> </jdbc:string-keyed-table> </jdbc:string-keyed-jdbc-store> </persistence> </distributed-cache>", "{ \"distributed-cache\": { \"persistence\": { \"string-keyed-jdbc-store\": { \"data-source\": { \"jndi-url\": \"jdbc/postgres\" }, \"string-keyed-table\": { \"prefix\": \"TBL\", \"drop-on-exit\": true, \"create-on-start\": true, \"id-column\": { \"name\": \"ID\", \"type\": \"VARCHAR(255)\" }, \"data-column\": { \"name\": \"DATA\", \"type\": \"BYTEA\" }, \"timestamp-column\": { \"name\": \"TS\", \"type\": \"BIGINT\" }, \"segment-column\": { \"name\": \"S\", \"type\": \"INT\" } } } } } }", "distributedCache: persistence: stringKeyedJdbcStore: dataSource: jndi-url: \"jdbc/postgres\" stringKeyedTable: prefix: \"TBL\" dropOnExit: true createOnStart: true idColumn: name: \"ID\" type: \"VARCHAR(255)\" dataColumn: name: \"DATA\" type: \"BYTEA\" timestampColumn: name: \"TS\" type: \"BIGINT\" segmentColumn: name: \"S\" type: \"INT\"" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/managed-datasources
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for using Red Hat Software Collections 3.7, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the rh-perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql12 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the $X_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.3. Running a System Service from a Software Collection In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose to init scripts. To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql12 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory.
To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb105 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . Note that only the latest version of each container image is supported. The following container images are available with Red Hat Software Collections 3.7: rhscl/mariadb-105-rhel7 rhscl/postgresql-13-rhel7 rhscl/ruby-30-rhel7 rhscl/devtoolset-10-toolchain-rhel7 rhscl/devtoolset-10-perftools-rhel7 rhscl/ruby-27-rhel7 rhscl/ruby-26-rhel7 The following container images are based on Red Hat Software Collections 3.6: rhscl/httpd-24-rhel7 rhscl/nginx-118-rhel7 rhscl/nodejs-14-rhel7 rhscl/perl-530-rhel7 rhscl/php-73-rhel7 The following container images are based on Red Hat Software Collections 3.5: rhscl/python-38-rhel7 rhscl/varnish-6-rhel7 The following container images are based on Red Hat Software Collections 3.4: rhscl/nginx-116-rhel7 rhscl/nodejs-12-rhel7 rhscl/postgresql-12-rhel7 The following container images are based on Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 The following container image is based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 The following container image is based on Red Hat Software Collections 3.1: rhscl/postgresql-10-rhel7 The following container images are based on Red Hat Software Collections 2: rhscl/python-27-rhel7 rhscl/s2i-base-rhel7
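As a quick check that one of these images is available to your host, you can pull it and list it locally. This is only a sketch: the registry path is an assumption based on the image name, and the example uses the docker client as shipped with Red Hat Enterprise Linux 7:

# docker pull registry.access.redhat.com/rhscl/postgresql-13-rhel7
# docker images | grep rhscl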
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql12 bash", "~]USD echo USDX_SCLS python27 rh-postgresql12", "~]# systemctl start rh-postgresql12-postgresql.service ~]# systemctl enable rh-postgresql12-postgresql.service", "~]USD scl enable rh-mariadb105 \"man rh-mariadb105\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.7_release_notes/chap-usage
Chapter 3. Managing cluster PVC size
Chapter 3. Managing cluster PVC size 3.1. Configuring the default PVC size for your cluster To configure how resources are claimed within your OpenShift AI cluster, you can change the default size of the cluster's persistent volume claim (PVC) ensuring that the storage requested matches your common storage workflow. PVCs are requests for resources in your cluster and also act as claim checks to the resource. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Note Changing the PVC setting restarts the Jupyter pod and makes Jupyter unavailable for up to 30 seconds. As a workaround, it is recommended that you perform this action outside of your organization's typical working day. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Under PVC size , enter a new size in gibibytes or mebibytes. Click Save changes . Verification New PVCs are created with the default storage size that you configured. Additional resources Understanding persistent storage 3.2. Restoring the default PVC size for your cluster To change the size of resources utilized within your OpenShift AI cluster, you can restore the default size of your cluster's persistent volume claim (PVC). Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Click Restore Default to restore the default PVC size of 20GiB. Click Save changes . Verification New PVCs are created with the default storage size of 20 GiB. Additional resources Understanding persistent storage
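Under the hood, a PVC is a standard Kubernetes object. For illustration only, a claim created with the 20 GiB default might look like the following sketch; the claim name and namespace are hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-workbench-storage
  namespace: example-project
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi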
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/managing-cluster-pvc-size
Chapter 21. Shells and command-line tools
Chapter 21. Shells and command-line tools The following chapters contain the most notable changes to shells and command-line tools between RHEL 8 and RHEL 9. 21.1. Notable changes to system management Data Encryption Standard (DES) algorithm is not available for net-snmp communication in Red Hat Enterprise Linux 9 In previous versions of RHEL, DES was used as an encryption algorithm for secure communication between net-snmp clients and servers. In RHEL 9, the DES algorithm is not supported by the OpenSSL library. The algorithm is marked as insecure and the DES support for net-snmp has therefore been removed. The ABRT tool has been removed The Automatic Bug Reporting Tool (ABRT) for detecting and reporting application crashes is not available in RHEL 9. As a replacement, use the systemd-coredump tool to log and store core dumps, which are generated automatically after a program crashes. The hidepid=n mount option is not supported in RHEL 9 systemd The mount option hidepid=n , which controls who can access information in /proc/[pid] directories, is not compatible with systemd infrastructure provided in RHEL 9. In addition, using this option might cause certain services started by systemd to produce SELinux AVC denial messages and prevent other operations from being completed. The dump utility from the dump package has been removed The dump utility used for backup of file systems has been deprecated in Red Hat Enterprise Linux 8 and is not available in RHEL 9. In RHEL 9, Red Hat recommends using tar or dd as backup tools for ext2, ext3, and ext4 file systems. The dump utility will be a part of the EPEL 9 repository. Note that the restore utility from the dump package remains available and supported in RHEL 9 and is available as the restore package. RHEL 9 does not contain ReaR crontab The /etc/cron.d/rear crontab in the rear package, which runs rear mkrescue after the disk layout changes, has been removed in RHEL 9. If you relied on the /etc/cron.d/rear crontab to run rear mkrescue , you can manually configure periodic runs of ReaR instead. Note The rear package in RHEL contains the following examples for scheduling jobs: the /usr/share/doc/rear/rear.cron example crontab the /usr/share/doc/rear/rear.{service,timer} example systemd unit Do not use these examples without site-specific modifications or other actions to take updated backups for system recovery. You must take regular backups in addition to re-creating the rescue image. The steps to take a backup depend on the local configuration. If you run the rear mkrescue command without taking an updated backup at the same time, the system recovery process would use a backup that might be inconsistent with the saved layout. 21.2. Notable changes to command-line tools Support for the raw command-line tool has been removed With this release, the raw ( /usr/bin/raw ) command-line tool has been removed from the util-linux package, because the Linux kernel has not supported raw devices since version 5.14. Currently, there is no replacement available. cgroupsv1 is deprecated in RHEL 9 cgroups is a kernel subsystem used for process tracking, system resource allocation and partitioning. The systemd service manager supports booting in the cgroups v1 mode and in cgroups v2 mode. In Red Hat Enterprise Linux 9, the default mode is v2 . In the next major release, systemd will not support booting in the cgroups v1 mode, and only the cgroups v2 mode will be available.
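To confirm which cgroup mode a system booted with, you can check the file system type mounted at /sys/fs/cgroup ; cgroup2fs indicates cgroups v2, while tmpfs indicates the legacy cgroups v1 hierarchy:

$ stat -fc %T /sys/fs/cgroup/
cgroup2fs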
The lsb-release binary is not available in RHEL 9 The information in the /etc/os-release file was previously available by calling the lsb-release binary. This binary was included in the redhat-lsb package, which was removed in RHEL 9. Now, you can display information about the operating system, such as the distribution, version, code name, and associated metadata, by reading the /etc/os-release file. This file is provided by Red Hat and any changes to it are overwritten with each update of the redhat-release package. The format of the file is KEY=VALUE , and you can safely source the data for a shell script.
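For example, because the file uses the KEY=VALUE format, a shell script can source it and read individual fields directly; this minimal sketch prints the distribution name and version:

$ . /etc/os-release
$ echo "${NAME} ${VERSION_ID}"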
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_shells-and-command-line-tools_considerations-in-adopting-rhel-9
Chapter 5. Red Hat Developer Hub support
Chapter 5. Red Hat Developer Hub support If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal . You can use the Red Hat Customer Portal for the following purposes: To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version. For information about supported platforms and life cycle details, see Red Hat Developer Hub Life Cycle . Next steps Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service Installing Red Hat Developer Hub on Google Cloud Platform Installing Red Hat Developer Hub on Google Kubernetes Engine Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service Installing Red Hat Developer Hub on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/about_red_hat_developer_hub/ref-customer-support-info_about-rhdh
Chapter 3. Heat parameters
Chapter 3. Heat parameters Each heat template in the director template collection contains a parameters section. This section contains definitions for all parameters specific to a particular overcloud service. This includes the following: overcloud.j2.yaml - Default base parameters roles_data.yaml - Default parameters for composable roles deployment/*.yaml - Default parameters for specific services You can modify the values for these parameters using the following method: Create an environment file for your custom parameters. Include your custom parameters in the parameter_defaults section of the environment file. Include the environment file with the openstack overcloud deploy command. 3.1. Example 1: Configuring the time zone The Heat template for setting the timezone ( puppet/services/time/timezone.yaml ) contains a TimeZone parameter. If you leave the TimeZone parameter blank, the overcloud sets the time to UTC as a default. To obtain a list of timezones, run the timedatectl list-timezones command. The following example command retrieves the timezones for Asia: After you identify your timezone, set the TimeZone parameter in an environment file. The following example environment file sets the value of TimeZone to Asia/Tokyo : 3.2. Example 2: Configuring RabbitMQ file descriptor limit For certain configurations, you might need to increase the file descriptor limit for the RabbitMQ server. Use the deployment/rabbitmq/rabbitmq-container-puppet.yaml heat template to set a new limit in the RabbitFDLimit parameter. Add the following entry to an environment file: 3.3. Example 3: Enabling and disabling parameters You might need to initially set a parameter during a deployment, then disable the parameter for a future deployment operation, such as updates or scaling operations. For example, to include a custom RPM during the overcloud creation, include the following entry in an environment file: To disable this parameter from a future deployment, it is not sufficient to remove the parameter. Instead, you must set the parameter to an empty value: This ensures that the parameter is no longer set for subsequent deployment operations. 3.4. Example 4: Role-based parameters Use the [ROLE]Parameters parameters, replacing [ROLE] with a composable role, to set parameters for a specific role. For example, director configures sshd on both Controller and Compute nodes. To set different sshd parameters for Controller and Compute nodes, create an environment file that contains both the ControllerParameters and ComputeParameters parameters and set the sshd parameters for each specific role: 3.5. Identifying parameters that you want to modify Red Hat OpenStack Platform director provides many parameters for configuration. In some cases, you might experience difficulty identifying a certain option that you want to configure, and the corresponding director parameter. If there is an option that you want to configure with director, use the following workflow to identify and map the option to a specific overcloud parameter: Identify the option that you want to configure. Make a note of the service that uses the option. Check the corresponding Puppet module for this option. The Puppet modules for Red Hat OpenStack Platform are located under /etc/puppet/modules on the director node. Each module corresponds to a particular service. For example, the keystone module corresponds to OpenStack Identity (keystone). If the Puppet module contains a variable that controls the chosen option, proceed to the next step.
If the Puppet module does not contain a variable that controls the chosen option, no hieradata exists for this option. If possible, you can set the option manually after the overcloud completes deployment. Check the core heat template collection for the Puppet variable in the form of hieradata. The templates in deployment/* usually correspond to the Puppet modules of the same services. For example, the deployment/keystone/keystone-container-puppet.yaml template provides hieradata to the keystone module. If the heat template sets hieradata for the Puppet variable, the template should also disclose the director-based parameter that you can modify. If the heat template does not set hieradata for the Puppet variable, use the configuration hooks to pass the hieradata using an environment file. See Section 4.5, "Puppet: Customizing hieradata for roles" for more information on customizing hieradata. Procedure To change the notification format for OpenStack Identity (keystone), use the workflow and complete the following steps: Identify the OpenStack parameter that you want to configure ( notification_format ). Search the keystone Puppet module for the notification_format setting: In this case, the keystone module manages this option using the keystone::notification_format variable. Search the keystone service template for this variable: The output shows that director uses the KeystoneNotificationFormat parameter to set the keystone::notification_format hieradata. The following table shows the eventual mapping: Director parameter Puppet hieradata OpenStack Identity (keystone) option KeystoneNotificationFormat keystone::notification_format notification_format You set the KeystoneNotificationFormat in an overcloud environment file, which then sets the notification_format option in the keystone.conf file during the overcloud configuration.
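Putting the workflow together, a minimal custom environment file and the corresponding deployment command might look like the following sketch. The file path is an example, and cadf is shown only as one possible notification format value:

parameter_defaults:
  KeystoneNotificationFormat: cadf

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/keystone-notifications.yaml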
[ "sudo timedatectl list-timezones|grep \"Asia\"", "parameter_defaults: TimeZone: 'Asia/Tokyo'", "parameter_defaults: RabbitFDLimit: 65536", "parameter_defaults: DeployArtifactURLs: [\"http://www.example.com/myfile.rpm\"]", "parameter_defaults: DeployArtifactURLs: []", "parameter_defaults: ControllerParameters: BannerText: \"This is a Controller node\" ComputeParameters: BannerText: \"This is a Compute node\"", "grep notification_format /etc/puppet/modules/keystone/manifests/*", "grep \"keystone::notification_format\" /usr/share/openstack-tripleo-heat-templates/deployment/keystone/keystone-container-puppet.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_heat-parameters
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1]
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. 
Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. Type object Required source Property Type Description mirrors array (string) mirrors is one or more repositories that may also contain the same images. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies DELETE : delete collection of ImageContentSourcePolicy GET : list objects of kind ImageContentSourcePolicy POST : create an ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} DELETE : delete an ImageContentSourcePolicy GET : read the specified ImageContentSourcePolicy PATCH : partially update the specified ImageContentSourcePolicy PUT : replace the specified ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status GET : read status of the specified ImageContentSourcePolicy PATCH : partially update status of the specified ImageContentSourcePolicy PUT : replace status of the specified ImageContentSourcePolicy 13.2.1. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies HTTP method DELETE Description delete collection of ImageContentSourcePolicy Table 13.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentSourcePolicy Table 13.2. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentSourcePolicy Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.5. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 202 - Accepted ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.2. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method DELETE Description delete an ImageContentSourcePolicy Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentSourcePolicy Table 13.9. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentSourcePolicy Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentSourcePolicy Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.3. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method GET Description read status of the specified ImageContentSourcePolicy Table 13.16. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentSourcePolicy Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentSourcePolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty
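For reference, a minimal ImageContentSourcePolicy manifest that uses the fields described above might look like the following sketch; the source repository and mirror locations are placeholders:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-mirror-policy
spec:
  repositoryDigestMirrors:
  - source: registry.example.com/myteam/myapp
    mirrors:
    - mirror.internal.example.net/myteam/myapp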
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/imagecontentsourcepolicy-operator-openshift-io-v1alpha1
Chapter 5. Preparing Storage for Red Hat Virtualization
Chapter 5. Preparing Storage for Red Hat Virtualization You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 5.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting, and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. # vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which exports are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all of the above steps, the exported directory should be ready and can be tested on a different host to check that it is usable. 5.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes.
This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } 5.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 5.4. Preparing POSIX-compliant File System Storage POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP. Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2 . 
Important Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. 5.5. Preparing local storage On Red Hat Virtualization Host (RHVH), local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk, to prevent possible loss of data during upgrades. Procedure for Red Hat Enterprise Linux hosts On the host, create the directory to be used for the local storage: # mkdir -p /data/images Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /data/images # chmod 0755 /data /data/images Procedure for Red Hat Virtualization Hosts Create the local storage on a logical volume: Create a local storage directory: # mkdir /data # lvcreate -L USDSIZE rhvh -n data # mkfs.ext4 /dev/mapper/rhvh-data # echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab # mount /data Mount the new local storage: # mount -a Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /rhvh-data # chmod 0755 /data /rhvh-data 5.6. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 5.7. Customizing Multipath Configurations for SAN Vendors If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf . To override the multipath settings, do not customize /etc/multipath.conf . Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override. VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf . To avoid causing severe storage failures, follow these guidelines: Do not modify /etc/multipath.conf . If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems. Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf . Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf . Warning Not following these guidelines can cause catastrophic storage errors. Prerequisites VDSM is configured to use the multipath module. To verify this, enter: Procedure Create a new configuration file in the /etc/multipath/conf.d directory. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf . Remove any comment marks, edit the setting values, and save your changes. 
Apply the new configuration settings by entering: Note Do not restart the multipathd service. Doing so generates errors in the VDSM logs. Verification steps Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections. Enable one connection at a time and verify that doing so makes the storage domain reachable. Additional resources Recommended Settings for Multipath.conf Red Hat Enterprise Linux DM Multipath Configuring iSCSI Multipathing How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? 5.8. Recommended Settings for Multipath.conf Do not override the following settings: user_friendly_names no Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID} . The default value of this setting, no , prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior. Warning Do not change this setting to user_friendly_names yes . User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported. find_multipaths no This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no , allows RHV to access devices through multipath even if only one path is available. Warning Do not override this setting. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
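As an illustration of the drop-in approach described in this chapter, a vendor-specific override file might look like the following sketch. The vendor and product strings and the no_path_retry value are placeholders; add such an override only when your storage vendor explicitly requires it:

# cat /etc/multipath/conf.d/90-vendor-override.conf
devices {
    device {
        vendor "EXAMPLEVENDOR"
        product "EXAMPLEARRAY"
        no_path_retry 12
    }
}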
[ "dnf install nfs-utils -y", "cat /proc/fs/nfsd/versions", "systemctl enable nfs-server systemctl enable rpcbind", "groupadd kvm -g 36", "useradd vdsm -u 36 -g kvm", "mkdir /storage chmod 0755 /storage chown 36:36 /storage/", "vi /etc/exports cat /etc/exports /storage *(rw)", "systemctl restart rpcbind systemctl restart nfs-server", "exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }", "mkdir -p /data/images", "chown 36:36 /data /data/images chmod 0755 /data /data/images", "mkdir /data lvcreate -L USDSIZE rhvh -n data mkfs.ext4 /dev/mapper/rhvh-data echo \"/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2\" >> /etc/fstab mount /data", "mount -a", "chown 36:36 /data /rhvh-data chmod 0755 /data /rhvh-data", "vdsm-tool is-configured --module multipath", "systemctl reload multipathd" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/Preparing_Storage_for_RHV_SM_remoteDB_deploy
14.4. Configuring a Multihomed DHCP Server
14.4. Configuring a Multihomed DHCP Server A multihomed DHCP server serves multiple networks, that is, multiple subnets. The examples in these sections detail how to configure a DHCP server to serve multiple networks, select which network interfaces to listen on, and how to define network settings for systems that move networks. Before making any changes, back up the existing /etc/dhcp/dhcpd.conf file. The DHCP daemon will only listen on interfaces for which it finds a subnet declaration in the /etc/dhcp/dhcpd.conf file. The following is a basic /etc/dhcp/dhcpd.conf file, for a server that has two network interfaces, enp1s0 in a 10.0.0.0/24 network, and enp2s0 in a 172.16.0.0/24 network. Multiple subnet declarations allow you to define different settings for multiple networks: subnet 10.0.0.0 netmask 255.255.255.0 ; A subnet declaration is required for every network your DHCP server is serving. Multiple subnets require multiple subnet declarations. If the DHCP server does not have a network interface in a range of a subnet declaration, the DHCP server does not serve that network. If there is only one subnet declaration, and no network interfaces are in the range of that subnet, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : option subnet-mask 255.255.255.0 ; The option subnet-mask option defines a subnet mask, and overrides the netmask value in the subnet declaration. In simple cases, the subnet and netmask values are the same. option routers 10.0.0.1 ; The option routers option defines the default gateway for the subnet. This is required for systems to reach internal networks on a different subnet, as well as external networks. range 10.0.0.5 10.0.0.15 ; The range option specifies the pool of available IP addresses. Systems are assigned an address from the range of specified IP addresses. For further information, see the dhcpd.conf(5) man page. Warning To avoid misconfiguration when DHCP server gives IP addresses from one IP range to another physical Ethernet segment, make sure you do not enclose more subnets in a shared-network declaration. 14.4.1. Host Configuration Before making any changes, back up the existing /etc/sysconfig/dhcpd and /etc/dhcp/dhcpd.conf files. Configuring a Single System for Multiple Networks The following /etc/dhcp/dhcpd.conf example creates two subnets, and configures an IP address for the same system, depending on which network it connects to: host example0 The host declaration defines specific parameters for a single system, such as an IP address. To configure specific parameters for multiple hosts, use multiple host declarations. Most DHCP clients ignore the name in host declarations, and as such, this name can be anything, as long as it is unique to other host declarations. To configure the same system for multiple networks, use a different name for each host declaration, otherwise the DHCP daemon fails to start. Systems are identified by the hardware ethernet option, not the name in the host declaration. hardware ethernet 00:1A:6B:6A:2E:0B ; The hardware ethernet option identifies the system. To find this address, run the ip link command. fixed-address 10.0.0.20 ; The fixed-address option assigns a valid IP address to the system specified by the hardware ethernet option. This address must be outside the IP address pool specified with the range option. 
If option statements do not end with a semicolon, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : Configuring Systems with Multiple Network Interfaces The following host declarations configure a single system, which has multiple network interfaces, so that each interface receives the same IP address. This configuration will not work if both network interfaces are connected to the same network at the same time: For this example, interface0 is the first network interface, and interface1 is the second interface. The different hardware ethernet options identify each interface. If such a system connects to another network, add more host declarations, remembering to: assign a valid fixed-address for the network the host is connecting to. make the name in the host declaration unique. When a name given in a host declaration is not unique, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : This error was caused by having multiple host interface0 declarations defined in /etc/dhcp/dhcpd.conf .
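After editing /etc/dhcp/dhcpd.conf , you can catch problems such as missing semicolons or duplicate host declarations before they stop the daemon by testing the configuration and then restarting the service; the -t option performs a syntax check only:

# dhcpd -t -cf /etc/dhcp/dhcpd.conf
# systemctl restart dhcpd.service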
[ "default-lease-time 600 ; max-lease-time 7200 ; subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 10.0.0.1; range 10.0.0.5 10.0.0.15; } subnet 172.16.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 172.16.0.1; range 172.16.0.5 172.16.0.15; }", "dhcpd: No subnet declaration for enp1s0 (0.0.0.0). dhcpd: ** Ignoring requests on enp1s0. If this is not what dhcpd: you want, please write a subnet declaration dhcpd: in your dhcpd.conf file for the network segment dhcpd: to which interface enp2s0 is attached. ** dhcpd: dhcpd: dhcpd: Not configured to listen on any interfaces!", "default-lease-time 600 ; max-lease-time 7200 ; subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 10.0.0.1; range 10.0.0.5 10.0.0.15; } subnet 172.16.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 172.16.0.1; range 172.16.0.5 172.16.0.15; } host example0 { hardware ethernet 00:1A:6B:6A:2E:0B; fixed-address 10.0.0.20; } host example1 { hardware ethernet 00:1A:6B:6A:2E:0B; fixed-address 172.16.0.20; }", "/etc/dhcp/dhcpd.conf line 20: semicolon expected. dhcpd: } dhcpd: ^ dhcpd: /etc/dhcp/dhcpd.conf line 38: unexpected end of file dhcpd: dhcpd: ^ dhcpd: Configuration file errors encountered -- exiting", "host interface0 { hardware ethernet 00:1a:6b:6a:2e:0b; fixed-address 10.0.0.18; } host interface1 { hardware ethernet 00:1A:6B:6A:27:3A; fixed-address 10.0.0.18; }", "dhcpd: /etc/dhcp/dhcpd.conf line 31: host interface0: already exists dhcpd: } dhcpd: ^ dhcpd: Configuration file errors encountered -- exiting" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configuring_a_Multihomed_DHCP_Server
A.19. Common libvirt Errors and Troubleshooting
A.19. Common libvirt Errors and Troubleshooting This appendix documents common libvirt -related problems and errors along with instructions for dealing with them. Locate the error on the table below and follow the corresponding link under Solution for detailed troubleshooting information. Table A.1. Common libvirt errors Error Description of problem Solution libvirtd failed to start The libvirt daemon failed to start. However, there is no information about this error in /var/log/messages . Section A.19.1, " libvirtd failed to start" Cannot read CA certificate This is one of several errors that occur when the URI fails to connect to the hypervisor. Section A.19.2, "The URI Failed to Connect to the Hypervisor" Other connectivity errors These are other errors that occur when the URI fails to connect to the hypervisor. Section A.19.2, "The URI Failed to Connect to the Hypervisor" PXE boot (or DHCP) on guest failed A guest virtual machine starts successfully, but is unable to acquire an IP address from DHCP, boot using the PXE protocol, or both. This is often a result of a long forward delay time set for the bridge, or when the iptables package and kernel do not support checksum mangling rules. Section A.19.3, "PXE Boot (or DHCP) on Guest Failed" Guest can reach outside network, but cannot reach host when using macvtap interface A guest can communicate with other guests, but cannot connect to the host machine after being configured to use a macvtap (or type='direct' ) network interface. This is actually not an error - it is the defined behavior of macvtap. Section A.19.4, "Guest Can Reach Outside Network, but Cannot Reach Host When Using macvtap interface" Could not add rule to fixup DHCP response checksums on network 'default' This warning message is almost always harmless, but is often mistakenly seen as evidence of a problem. Section A.19.5, "Could not add rule to fixup DHCP response checksums on network 'default' " Unable to add bridge br0 port vnet0: No such device This error message or the similar Failed to add tap interface to bridge 'br0' : No such device reveal that the bridge device specified in the guest's (or domain's) <interface> definition does not exist. Section A.19.6, "Unable to add bridge br0 port vnet0: No such device" Unable to resolve address name_of_host service '49155': Name or service not known QEMU guest migration fails and this error message appears with an unfamiliar host name. Section A.19.7, "Migration Fails with error: unable to resolve address " Unable to allow access for disk path /var/lib/libvirt/images/qemu.img: No such file or directory A guest virtual machine cannot be migrated because libvirt cannot access the disk image(s). Section A.19.8, "Migration Fails with Unable to allow access for disk path: No such file or directory " No guest virtual machines are present when libvirtd is started The libvirt daemon is successfully started, but no guest virtual machines appear to be present when running virsh list --all . Section A.19.9, "No Guest Virtual Machines are Present when libvirtd is Started" Common XML errors libvirt uses XML documents to store structured data. Several common errors occur with XML documents when they are passed to libvirt through the API. This entry provides instructions for editing guest XML definitions, and details common errors in XML syntax and configuration. Section A.19.10, "Common XML Errors" A.19.1. libvirtd failed to start Symptom The libvirt daemon does not start automatically. 
Starting the libvirt daemon manually fails as well: Moreover, there is no 'more info' about this error in /var/log/messages . Investigation Change libvirt's logging in /etc/libvirt/libvirtd.conf by enabling the line below. To enable this setting, open the /etc/libvirt/libvirtd.conf file in a text editor, remove the hash (or # ) symbol from the beginning of the following line, and save the change: Note This line is commented out by default to prevent libvirt from producing excessive log messages. After diagnosing the problem, it is recommended to comment this line again in the /etc/libvirt/libvirtd.conf file. Restart libvirt to determine if this has solved the problem. If libvirtd still does not start successfully, an error similar to the following will be printed: The libvirtd man page shows that the missing cacert.pem file is used as the TLS authority when libvirt is run in Listen for TCP/IP connections mode. This means the --listen parameter is being passed. Solution Configure the libvirt daemon's settings with one of the following methods: Install a CA certificate. Note For more information on CA certificates and configuring system authentication, see the Managing Certificates and Certificate Authorities chapter in the Red Hat Enterprise Linux 7 Domain Identity, Authentication, and Policy Guide . Do not use TLS; use bare TCP instead. In /etc/libvirt/libvirtd.conf set listen_tls = 0 and listen_tcp = 1 . The default values are listen_tls = 1 and listen_tcp = 0 . Do not pass the --listen parameter. In the /etc/sysconfig/libvirtd file, change the LIBVIRTD_ARGS variable. A.19.2. The URI Failed to Connect to the Hypervisor Several different errors can occur when connecting to the server (for example, when running virsh ). A.19.2.1. Cannot read CA certificate Symptom When running a command, the following error (or similar) appears: Investigation The error message is misleading about the actual cause. This error can be caused by a variety of factors, such as an incorrectly specified URI, or a connection that is not configured. Solution Incorrectly specified URI When specifying qemu://system or qemu://session as a connection URI, virsh attempts to connect to a host named system or session , respectively. This is because virsh recognizes the text after the second forward slash as the host. Use three forward slashes to connect to the local host. For example, specifying qemu:///system instructs virsh to connect to the system instance of libvirtd on the local host. When a host name is specified, the QEMU transport defaults to TLS . This requires correctly configured certificates. Connection is not configured The URI is correct (for example, qemu[+tls]://server/system ) but the certificates are not set up properly on your machine. For information on configuring TLS, see the upstream libvirt website . A.19.2.2. unable to connect to server at 'host:16509': Connection refused Symptom While libvirtd should listen on TCP ports for connections, the connections fail: The libvirt daemon is not listening on TCP ports even after changing configuration in /etc/libvirt/libvirtd.conf : However, the TCP ports for libvirt are still not open after changing configuration: Investigation The libvirt daemon was started without the --listen option. Verify this by running this command: The output does not contain the --listen option. Solution Start the daemon with the --listen option. To do this, modify the /etc/sysconfig/libvirtd file and uncomment the following line: Then, restart the libvirtd service with this command:
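A consolidated, hedged sketch of the listen configuration discussed in this section follows; the settings mirror the values quoted above, the comments indicate which file each line belongs to, and the restart and verification commands assume a systemd-based host.

# In /etc/libvirt/libvirtd.conf, enable verbose logging while diagnosing the failure:
log_outputs="3:syslog:libvirtd"

# To use bare TCP instead of TLS, also in /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1

# In /etc/sysconfig/libvirtd, uncomment the --listen argument:
LIBVIRTD_ARGS="--listen"

# Apply the changes and confirm the daemon is listening on TCP port 16509:
systemctl restart libvirtd.service
netstat -lntp | grep libvirtd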
A.19.2.3. Authentication Failed Symptom When running a command, the following error (or similar) appears: Investigation If authentication fails even when the correct credentials are used, it is possible that SASL authentication is not configured. Solution Edit the /etc/libvirt/libvirtd.conf file and set the value of the auth_tcp parameter to sasl . To verify: Edit the /etc/sasl2/libvirt.conf file and add the following lines to the file: Ensure the cyrus-sasl-md5 package is installed: Restart the libvirtd service: Set a user name and password for libvirt SASL: A.19.2.4. Permission Denied Symptom When running a virsh command as a non-root user, the following error (or similar) appears: Solution Edit the /etc/libvirt/libvirtd.conf file and uncomment the following lines, adjusting the values as needed: Restart the libvirtd service: A.19.3. PXE Boot (or DHCP) on Guest Failed Symptom A guest virtual machine starts successfully, but is then either unable to acquire an IP address from DHCP or boot using the PXE protocol, or both. There are two common causes of this error: a long forward delay time set on the bridge, and an iptables package and kernel that do not support checksum mangling rules. Long forward delay time on bridge Investigation This is the most common cause of this error. If the guest network interface is connecting to a bridge device that has STP (Spanning Tree Protocol) enabled, as well as a long forward delay set, the bridge will not forward network packets from the guest virtual machine onto the bridge until at least that number of forward delay seconds have elapsed since the guest connected to the bridge. This delay allows the bridge time to watch traffic from the interface and determine the MAC addresses behind it, and to prevent forwarding loops in the network topology. If the forward delay is longer than the timeout of the guest's PXE or DHCP client, the client's operation will fail, and the guest will either fail to boot (in the case of PXE) or fail to acquire an IP address (in the case of DHCP). Solution If this is the case, change the forward delay on the bridge to 0, disable STP on the bridge, or both. Note This solution applies only if the bridge is not used to connect multiple networks, but just to connect multiple endpoints to a single network (the most common use case for bridges used by libvirt ). If the guest has interfaces connecting to a libvirt -managed virtual network, edit the definition for the network, and restart it. For example, edit the default network with the following command: Add the following attributes to the <bridge> element: <bridge name='virbr0' delay='0' stp='on'/> Note delay='0' and stp='on' are the default settings for virtual networks, so this step is only necessary if the configuration has been modified from the default. If the guest interface is connected to a host bridge that was configured outside of libvirt , change the delay setting. Add or edit the following lines in the /etc/sysconfig/network-scripts/ifcfg- name_of_bridge file to turn STP on with a 0 second delay: STP=on DELAY=0 After changing the configuration file, restart the bridge device: /usr/sbin/ifdown name_of_bridge /usr/sbin/ifup name_of_bridge Note If name_of_bridge is not the root bridge in the network, that bridge's delay will eventually be reset to the delay time configured for the root bridge. To prevent this from occurring, disable STP on name_of_bridge .
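For example, a minimal sketch of applying this fix to the libvirt-managed default network might look like the following; the net-destroy and net-start pair is one way to restart the network and is an assumption, since the text above only says to restart it.

virsh net-edit default
# set the bridge element to: <bridge name='virbr0' delay='0' stp='on'/>
virsh net-destroy default     # stop the network
virsh net-start default       # start it again with the new settings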
The iptables package and kernel do not support checksum mangling rules Investigation This message is only a problem if all four of the following conditions are true: The guest is using virtio network devices. If so, the configuration file will contain model type='virtio' The host has the vhost-net module loaded. This is true if ls /dev/vhost-net does not return an empty result. The guest is attempting to get an IP address from a DHCP server that is running directly on the host. The iptables version on the host is older than 1.4.10. iptables 1.4.10 was the first version to add the libxt_CHECKSUM extension. This is the case if the following message appears in the libvirtd logs: Important Unless all of the other three conditions in this list are also true, the above warning message can be disregarded, and is not an indicator of any other problems. When these conditions occur, UDP packets sent from the host to the guest have uncomputed checksums. This makes the host's UDP packets seem invalid to the guest's network stack. Solution To solve this problem, invalidate any of the four points above. The best solution is to update the host iptables and kernel to iptables-1.4.10 or newer where possible. Otherwise, the most specific fix is to disable the vhost-net driver for this particular guest. To do this, edit the guest configuration with this command: Change or add a <driver> line to the <interface> section: <interface type='network'> <model type='virtio'/> <driver name='qemu'/> ... </interface> Save the changes, shut down the guest, and then restart it. If this problem is still not resolved, the issue may be due to a conflict between firewalld and the default libvirt network. To fix this, stop firewalld with the service firewalld stop command, then restart libvirt with the service libvirtd restart command. Note In addition, if the /etc/sysconfig/network-scripts/ifcfg- network_name file is configured correctly, you can ensure that the guest acquires an IP address by using the dhclient command as root on the guest. A.19.4. Guest Can Reach Outside Network, but Cannot Reach Host When Using macvtap interface Symptom A guest virtual machine can communicate with other guests, but cannot connect to the host machine after being configured to use a macvtap (also known as type='direct' ) network interface. Investigation Even when not connecting to a Virtual Ethernet Port Aggregator (VEPA) or VN-Link capable switch, macvtap interfaces can be useful. Setting the mode of such an interface to bridge allows the guest to be directly connected to the physical network in a very simple manner without the setup issues (or NetworkManager incompatibility) that can accompany the use of a traditional host bridge device. However, when a guest virtual machine is configured to use a type='direct' network interface such as macvtap, despite having the ability to communicate with other guests and other external hosts on the network, the guest cannot communicate with its own host. This situation is actually not an error - it is the defined behavior of macvtap. Due to the way in which the host's physical Ethernet is attached to the macvtap bridge, traffic into that bridge from the guests that is forwarded to the physical interface cannot be bounced back up to the host's IP stack. Additionally, traffic from the host's IP stack that is sent to the physical interface cannot be bounced back up to the macvtap bridge for forwarding to the guests. 
Solution Use libvirt to create an isolated network, and create a second interface for each guest virtual machine that is connected to this network. The host and guests can then directly communicate over this isolated network, while also maintaining compatibility with NetworkManager . Procedure A.8. Creating an isolated network with libvirt Add and save the following XML in the /tmp/isolated.xml file. If the 192.168.254.0/24 network is already in use elsewhere on your network, you can choose a different network. ... <network> <name>isolated</name> <ip address='192.168.254.1' netmask='255.255.255.0'> <dhcp> <range start='192.168.254.2' end='192.168.254.254'/> </dhcp> </ip> </network> ... Figure A.3. Isolated Network XML Create the network with this command: virsh net-define /tmp/isolated.xml Set the network to autostart with the virsh net-autostart isolated command. Start the network with the virsh net-start isolated command. Using virsh edit name_of_guest , edit the configuration of each guest that uses macvtap for its network connection and add a new <interface> in the <devices> section similar to the following (note the <model type='virtio'/> line is optional to include): ... <interface type='network' trustGuestRxFilters='yes'> <source network='isolated'/> <model type='virtio'/> </interface> Figure A.4. Interface Device XML Shut down, then restart each of these guests. The guests are now able to reach the host at the address 192.168.254.1, and the host will be able to reach the guests at the IP address they acquired from DHCP (alternatively, you can manually configure the IP addresses for the guests). Since this new network is isolated to only the host and guests, all other communication from the guests will use the macvtap interface. For more information, see Section 23.17.8, "Network Interfaces" . A.19.5. Could not add rule to fixup DHCP response checksums on network 'default' Symptom This message appears: Investigation Although this message appears to be evidence of an error, it is almost always harmless. Solution Unless the problem you are experiencing is that the guest virtual machines are unable to acquire IP addresses through DHCP, this message can be ignored. If this is the case, see Section A.19.3, "PXE Boot (or DHCP) on Guest Failed" for further details on this situation. A.19.6. Unable to add bridge br0 port vnet0: No such device Symptom The following error message appears: For example, if the bridge name is br0 , the error message appears as: In libvirt versions 0.9.6 and earlier, the same error appears as: Or for example, if the bridge is named br0 : Investigation Both error messages reveal that the bridge device specified in the guest's (or domain's) <interface> definition does not exist. To verify the bridge device listed in the error message does not exist, use the shell command ip addr show br0 . A message similar to this confirms the host has no bridge by that name: If this is the case, continue to the solution. However, if the resulting message is similar to the following, the issue exists elsewhere: Solution Edit the existing bridge or create a new bridge with virsh Use virsh to either edit the settings of an existing bridge or network, or to add the bridge device to the host system configuration. Edit the existing bridge settings using virsh Use virsh edit name_of_guest to change the <interface> definition to use a bridge or network that already exists. For example, change type='bridge' to type='network' , and <source bridge='br0'/> to <source network='default'/> . 
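A small sketch of that first option, switching a guest from the missing host bridge to the default libvirt network; the guest name my-guest is a placeholder.

virsh edit my-guest
# change:  <interface type='bridge'> ... <source bridge='br0'/>
# to:      <interface type='network'> ... <source network='default'/>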
Create a host bridge using virsh For libvirt version 0.9.8 and later, a bridge device can be created with the virsh iface-bridge command. This creates a bridge device br0 with eth0 , the physical network interface that is set as part of a bridge, attached: Optional: If needed, remove this bridge and restore the original eth0 configuration with this command: Create a host bridge manually For older versions of libvirt , it is possible to manually create a bridge device on the host. For instructions, see Section 6.4.3, "Bridged Networking with libvirt" . A.19.7. Migration Fails with error: unable to resolve address Symptom QEMU guest migration fails and this error message appears: For example, if the destination host name is newyork , the error message appears as: However, this error looks strange as we did not use newyork host name anywhere. Investigation During migration, libvirtd running on the destination host creates a URI from an address and port where it expects to receive migration data and sends it back to libvirtd running on the source host. In this case, the destination host ( 192.168.122.12 ) has its name set to 'newyork' . For some reason, libvirtd running on that host is unable to resolve the name to an IP address that could be sent back and still be useful. For this reason, it returned the 'newyork' host name hoping the source libvirtd would be more successful with resolving the name. This can happen if DNS is not properly configured or /etc/hosts has the host name associated with local loopback address ( 127.0.0.1 ). Note that the address used for migration data cannot be automatically determined from the address used for connecting to destination libvirtd (for example, from qemu+tcp://192.168.122.12/system ). This is because to communicate with the destination libvirtd , the source libvirtd may need to use network infrastructure different from the type that virsh (possibly running on a separate machine) requires. Solution The best solution is to configure DNS correctly so that all hosts involved in migration are able to resolve all host names. If DNS cannot be configured to do this, a list of every host used for migration can be added manually to the /etc/hosts file on each of the hosts. However, it is difficult to keep such lists consistent in a dynamic environment. If the host names cannot be made resolvable by any means, virsh migrate supports specifying the migration host: Destination libvirtd will take the tcp://192.168.122.12 URI and append an automatically generated port number. If this is not desirable (because of firewall configuration, for example), the port number can be specified in this command: Another option is to use tunneled migration. Tunneled migration does not create a separate connection for migration data, but instead tunnels the data through the connection used for communication with destination libvirtd (for example, qemu+tcp://192.168.122.12/system ): A.19.8. Migration Fails with Unable to allow access for disk path: No such file or directory Symptom A guest virtual machine (or domain) cannot be migrated because libvirt cannot access the disk image(s): For example, if the destination host name is newyork , the error message appears as: Investigation By default, migration only transfers the in-memory state of a running guest (such as memory or CPU state). Although disk images are not transferred during migration, they need to remain accessible at the same path by both hosts. Solution Set up and mount shared storage at the same location on both hosts. 
The simplest way to do this is to use NFS: Procedure A.9. Setting up shared storage Set up an NFS server on a host serving as shared storage. The NFS server can be one of the hosts involved in the migration, as long as all hosts involved are accessing the shared storage through NFS. Mount the exported directory at a common location on all hosts running libvirt . For example, if the IP address of the NFS server is 192.168.122.1, mount the directory with the following commands: Note It is not possible to export a local directory from one host using NFS and mount it at the same path on another host - the directory used for storing disk images must be mounted from shared storage on both hosts. If this is not configured correctly, the guest virtual machine may lose access to its disk images during migration, because the source host's libvirt daemon may change the owner, permissions, and SELinux labels on the disk images after it successfully migrates the guest to its destination. If libvirt detects that the disk images are mounted from a shared storage location, it will not make these changes. A.19.9. No Guest Virtual Machines are Present when libvirtd is Started Symptom The libvirt daemon is successfully started, but no guest virtual machines appear to be present. Investigation There are various possible causes of this problem. Performing these tests will help to determine the cause of this situation: Verify KVM kernel modules Verify that KVM kernel modules are inserted in the kernel: If you are using an AMD machine, verify the kvm_amd kernel modules are inserted in the kernel instead, using the similar command lsmod | grep kvm_amd in the root shell. If the modules are not present, insert them using the modprobe <modulename> command. Note Although it is uncommon, KVM virtualization support may be compiled into the kernel. In this case, modules are not needed. Verify virtualization extensions Verify that virtualization extensions are supported and enabled on the host: Enable virtualization extensions in your hardware's firmware configuration within the BIOS setup. See your hardware documentation for further details on this. Verify client URI configuration Verify that the URI of the client is configured as intended: For example, this message shows the URI is connected to the VirtualBox hypervisor, not QEMU , and reveals a configuration error for a URI that is otherwise set to connect to a QEMU hypervisor. If the URI was correctly connecting to QEMU , the same message would appear instead as: This situation occurs when there are other hypervisors present, which libvirt may speak to by default. Solution After performing these tests, use the following command to view a list of guest virtual machines: A.19.10. Common XML Errors The libvirt tool uses XML documents to store structured data. A variety of common errors occur with XML documents when they are passed to libvirt through the API. Several common XML errors - including erroneous XML tags, inappropriate values, and missing elements - are detailed below. A.19.10.1. Editing domain definition Although it is not recommended, it is sometimes necessary to edit a guest virtual machine's (or a domain's) XML file manually. To access the guest's XML for editing, use the following command: This command opens the file in a text editor with the current definition of the guest virtual machine. After finishing the edits and saving the changes, the XML is reloaded and parsed by libvirt . 
If the XML is correct, the following message is displayed: Important When using the edit command in virsh to edit an XML document, save all changes before exiting the editor. After saving the XML file, use the xmllint command to validate that the XML is well-formed, or the virt-xml-validate command to check for usage problems: If no errors are returned, the XML description is well-formed and matches the libvirt schema. While the schema does not catch all constraints, fixing any reported errors will further troubleshooting. XML documents stored by libvirt These documents contain definitions of states and configurations for the guests. These documents are automatically generated and should not be edited manually. Errors in these documents contain the file name of the broken document. The file name is valid only on the host machine defined by the URI, which may see the machine the command was run on. Errors in files created by libvirt are rare. However, one possible source of these errors is a downgrade of libvirt - while newer versions of libvirt can always read XML generated by older versions, older versions of libvirt may be confused by XML elements added in a newer version. A.19.10.2. XML syntax errors Syntax errors are caught by the XML parser. The error message contains information for identifying the problem. This example error message from the XML parser consists of three lines - the first line denotes the error message, and the two following lines contain the context and location of the XML code containing the error. The third line contains an indicator showing approximately where the error lies on the line above it: Information contained in this message: ( name_of_guest.xml ) This is the file name of the document that contains the error. File names in parentheses are symbolic names to describe XML documents parsed from memory, and do not directly correspond to files on disk. File names that are not contained in parentheses are local files that reside on the target of the connection. 6 This is the line number in the XML file that contains the error. StartTag: invalid element name This is the error message from the libxml2 parser, which describes the specific XML error. A.19.10.2.1. Stray < in the document Symptom The following error occurs: Investigation This error message shows that the parser expects a new element name after the < symbol on line 6 of a guest's XML file. Ensure line number display is enabled in your text editor. Open the XML file, and locate the text on line 6: This snippet of a guest's XML file contains an extra < in the document: Solution Remove the extra < or finish the new element. A.19.10.2.2. Unterminated attribute Symptom The following error occurs: Investigation This snippet of a guest's XML file contains an unterminated element attribute value: In this case, 'kvm' is missing a second quotation mark. Attribute values must be opened and closed with quotation marks or apostrophes, similar to XML start and end tags. Solution Correctly open and close all attribute value strings. A.19.10.2.3. Opening and ending tag mismatch Symptom The following error occurs: Investigation The error message above contains three clues to identify the offending tag: The message following the last colon, clock line 16 and domain , reveals that <clock> contains a mismatched tag on line 16 of the document. The last hint is the pointer in the context part of the message, which identifies the second offending tag. Unpaired tags must be closed with /> . 
The following snippet does not follow this rule and has produced the error message shown above: This error is caused by mismatched XML tags in the file. Every XML tag must have a matching start and end tag. Other examples of mismatched XML tags The following examples produce similar error messages and show variations of mismatched XML tags. This snippet contains a mismatch error for <features> because there is no end tag ( </features> ): This snippet contains an end tag ( </name> ) without a corresponding start tag: Solution Ensure all XML tags start and end correctly. A.19.10.2.4. Typographical errors in tags Symptom The following error message appears: Investigation XML errors are easily caused by a simple typographical error. This error message highlights the XML error - in this case, an extra white space within the word type - with a pointer. These XML examples will not parse correctly because of typographical errors such as a missing special character, or an additional character: Solution To identify the problematic tag, read the error message for the context of the file, and locate the error with the pointer. Correct the XML and save the changes. A.19.10.3. Logic and configuration errors A well-formatted XML document can contain errors that are correct in syntax but that libvirt cannot parse. Many of these errors exist, with two of the most common cases outlined below. A.19.10.3.1. Vanishing parts Symptom Parts of the change you have made do not show up and have no effect after editing or defining the domain. The define or edit command works, but when dumping the XML once again, the change disappears. Investigation This error likely results from a broken construct or syntax that libvirt does not parse. The libvirt tool will generally only look for constructs it knows but ignore everything else, resulting in some of the XML changes vanishing after libvirt parses the input. Solution Validate the XML input before passing it to the edit or define commands. The libvirt developers maintain a set of XML schemas bundled with libvirt that define the majority of the constructs allowed in XML documents used by libvirt . Validate libvirt XML files using the following command: If this command passes, libvirt will likely understand all constructs from your XML, except if the schemas cannot detect options that are valid only for a given hypervisor. For example, any XML generated by libvirt as a result of a virsh dumpxml command should validate without error. A.19.10.3.2. Incorrect drive device type Symptom The definition of the source image for the CD-ROM virtual drive is not present, despite being added: Solution Correct the XML by adding the missing <source> parameter as follows: A type='block' disk device expects that the source is a physical device. To use the disk with an image file, use type='file' instead.
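To tie the preceding subsections together, a hedged example of checking a guest definition before re-defining it might look like this; the guest name guest1 and the file path are placeholders.

virsh dumpxml guest1 > /tmp/guest1.xml
# edit /tmp/guest1.xml, for example to fix the <disk> source shown above
xmllint --noout /tmp/guest1.xml        # checks that the XML is well-formed
virt-xml-validate /tmp/guest1.xml      # also checks the document against the libvirt schema
virsh define /tmp/guest1.xml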
[ "systemctl start libvirtd.service * Caching service dependencies ... [ ok ] * Starting libvirtd /usr/sbin/libvirtd: error: Unable to initialize network sockets. Check /var/log/messages or run without --daemon for more info. * start-stop-daemon: failed to start `/usr/sbin/libvirtd' [ !! ] * ERROR: libvirtd failed to start", "log_outputs=\"3:syslog:libvirtd\"", "systemctl restart libvirtd Job for libvirtd.service failed because the control process exited with error code. See \"systemctl status libvirtd.service\" and \"journalctl -xe\" for details. Sep 19 16:06:02 jsrh libvirtd[30708]: 2017-09-19 14:06:02.097+0000: 30708: info : libvirt version: 3.7.0, package: 1.el7 (Unknown, 2017-09-06-09:01:55, js Sep 19 16:06:02 jsrh libvirtd[30708]: 2017-09-19 14:06:02.097+0000: 30708: info : hostname: jsrh Sep 19 16:06:02 jsrh libvirtd[30708]: 2017-09-19 14:06:02.097+0000: 30708: error : daemonSetupNetworking:502 : unsupported configuration: No server certif Sep 19 16:06:02 jsrh systemd[1]: libvirtd.service: main process exited, code=exited, status=6/NOTCONFIGURED Sep 19 16:06:02 jsrh systemd[1]: Failed to start Virtualization daemon. -- Subject: Unit libvirtd.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit libvirtd.service has failed. -- -- The result is failed.", "virsh -c qemu://USD hostname /system_list error: failed to connect to the hypervisor error: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory", "virsh -c qemu+tcp://host/system error: failed to connect to the hypervisor error: unable to connect to server at 'host:16509': Connection refused", "grep listen_ /etc/libvirt/libvirtd.conf listen_tls = 1 listen_tcp = 1 listen_addr = \"0.0.0.0\"", "netstat -lntp | grep libvirtd #", "ps aux | grep libvirtd root 10749 0.1 0.2 558276 18280 ? 
Ssl 23:21 0:00 /usr/sbin/libvirtd", "LIBVIRTD_ARGS=\"--listen\"", "/bin/systemctl restart libvirtd.service", "virsh -c qemu://USD hostname /system_list error: failed to connect to the hypervisor error: authentication failed: authentication failed", "cat /etc/libvirt/libvirtd.conf | grep auth_tcp auth_tcp = \"sasl\"", "mech_list: digest-md5 sasldb_path: /etc/libvirt/passwd.db", "yum install cyrus-sasl-md5", "systemctl restart libvirtd", "saslpasswd2 -a libvirt 1", "virsh -c qemu://USD hostname /system_list error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied error: failed to connect to the hypervisor", "#unix_sock_group = \"libvirt\" #unix_sock_ro_perms = \"0777\" #unix_sock_rw_perms = \"0770\"", "systemctl restart libvirtd", "virsh net-edit default", "< name_of_bridge ='virbr0' delay='0' stp='on' />", "STP=on DELAY=0", "/usr/sbin/ifdown name_of_bridge /usr/sbin/ifup name_of_bridge", "warning: Could not add rule to fixup DHCP response checksums on network default warning: May need to update iptables package and kernel to support CHECKSUM rule.", "virsh edit name_of_guest", "<interface type='network'> <model type='virtio'/> <driver name='qemu'/> </interface>", "<network> <name>isolated</name> <ip address='192.168.254.1' netmask='255.255.255.0'> <dhcp> <range start='192.168.254.2' end='192.168.254.254'/> </dhcp> </ip> </network>", "<interface type='network' trustGuestRxFilters='yes'> <source network='isolated'/> <model type='virtio'/> </interface>", "Could not add rule to fixup DHCP response checksums on network 'default'", "Unable to add bridge name_of_bridge port vnet0: No such device", "Unable to add bridge br0 port vnet0: No such device", "Failed to add tap interface to bridge name_of_bridge : No such device", "Failed to add tap interface to bridge 'br0' : No such device", "br0 : error fetching interface information: Device not found", "br0 Link encap:Ethernet HWaddr 00:00:5A:11:70:48 inet addr:10.22.1.5 Bcast:10.255.255.255 Mask:255.0.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:249841 errors:0 dropped:0 overruns:0 frame:0 TX packets:281948 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:106327234 (101.4 MiB) TX bytes:21182634 (20.2 MiB)", "virsh iface-bridge eth0 br0", "virsh iface-unbridge br0", "virsh migrate qemu qemu+tcp://192.168.122.12/system error: Unable to resolve address name_of_host service '49155': Name or service not known", "virsh migrate qemu qemu+tcp://192.168.122.12/system error: Unable to resolve address 'newyork' service '49155': Name or service not known", "virsh migrate qemu qemu+tcp://192.168.122.12/system tcp://192.168.122.12", "virsh migrate qemu qemu+tcp://192.168.122.12/system tcp://192.168.122.12:12345", "virsh migrate qemu qemu+tcp://192.168.122.12/system --p2p --tunnelled", "virsh migrate qemu qemu+tcp:// name_of_host /system error: Unable to allow access for disk path /var/lib/libvirt/images/qemu.img: No such file or directory", "virsh migrate qemu qemu+tcp:// newyork /system error: Unable to allow access for disk path /var/lib/libvirt/images/qemu.img: No such file or directory", "mkdir -p /exports/images cat >>/etc/exports <<EOF /exports/images 192.168.122.0/24(rw,no_root_squash) EOF", "cat >>/etc/fstab <<EOF 192.168.122.1:/exports/images /var/lib/libvirt/images nfs auto 0 0 EOF mount /var/lib/libvirt/images", "virsh list --all Id Name State ----------------------------------------------------", "lsmod | grep kvm kvm_intel 121346 0 kvm 328927 1 kvm_intel", "egrep \"(vmx|svm)\" 
/proc/cpuinfo flags : fpu vme de pse tsc ... svm ... skinit wdt npt lbrv svm_lock nrip_save flags : fpu vme de pse tsc ... svm ... skinit wdt npt lbrv svm_lock nrip_save", "virsh uri vbox:///system", "virsh uri qemu:///system", "virsh list --all", "virsh edit name_of_guest.xml", "virsh edit name_of_guest.xml Domain name_of_guest.xml XML configuration edited.", "xmllint --noout config.xml", "virt-xml-validate config.xml", "error: ( name_of_guest.xml ):6: StartTag: invalid element name <vcpu>2</vcpu>< -----------------^", "error: ( name_of_guest.xml ):6: StartTag: invalid element name <vcpu>2</vcpu>< -----------------^", "<domain type='kvm'> <name> name_of_guest </name> <memory>524288</memory> <vcpu>2</vcpu><", "error: ( name_of_guest.xml ):2: Unescaped '<' not allowed in attributes values <name> name_of_guest </name> --^", "<domain type='kvm> <name> name_of_guest </name>", "error: ( name_of_guest.xml ):61: Opening and ending tag mismatch: clock line 16 and domain </domain> ---------^", "<domain type='kvm'> <clock offset='utc'>", "<domain type='kvm'> <features> <acpi/> <pae/> </domain>", "<domain type='kvm'> </name> </domain>", "error: (name_of_guest.xml):1: Specification mandate value for attribute ty <domain ty pe='kvm'> -----------^", "<domain ty pe='kvm'>", "<domain type 'kvm'>", "<dom#ain type='kvm'>", "virt-xml-validate libvirt.xml", "virsh dumpxml domain <domain type='kvm'> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> </disk> </domain>", "<disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/path/to/image.iso'/> <target dev='hdc' bus='ide'/> <readonly/> </disk>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-common_libvirt_errors_and_troubleshooting
Appendix C. Supported Data Sources and Translators
Appendix C. Supported Data Sources and Translators C.1. Recommended Translators for Data Sources For a list of supported data sources and translators for this version of JDV, see the Red Hat JBoss Data Virtualization 6.x Supported Configurations article on the Red Hat Customer Portal. Note MS Excel is supported insofar as there is a write procedure. Note The MySQL InnoDB storage engine is not suitable for use as an external materialization target.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/appe-supported_sources
7.33. createrepo
7.33. createrepo 7.33.1. RHBA-2013:0328 - createrepo bug fix and enhancement update Updated createrepo packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The createrepo packages contain the utility that generates a common metadata repository from a directory of RPM packages. Note The createrepo packages have been upgraded to upstream version 0.9.9, which provides a number of bug fixes and enhancements over the previous version, including support for multitasking in the createrepo utility. This update also modifies the "--update" option to use the SQLite database instead of the XML files in order to reduce memory usage. (BZ#631989, BZ#716235) Bug Fix BZ# 833350 Previously, the createrepo utility ignored the "umask" setting for files created in the createrepo cache directory. This behavior caused problems when more than one user was updating repositories. The bug has been fixed, and multiple users can now update repositories without complications. Enhancements BZ#646644 It is now possible to use the "createrepo" command with both the "--split" and the "--pkglist" options simultaneously. BZ# 714094 It is now possible to remove metadata from the repodata directory using the modifyrepo program. This update also enhances updating of the existing metadata. All users of createrepo are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
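As an illustration of the updated utility, a repository refresh might look like the following; the directory path, worker count, and metadata type are examples only, and the modifyrepo --remove syntax shown is an assumption based on the enhancement described above.

# rebuild the metadata, reusing the cached SQLite data and running several workers
createrepo --update --workers 4 /srv/myrepo

# remove one metadata entry from the repodata directory (syntax assumed)
modifyrepo --remove other_db /srv/myrepo/repodata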
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/createrepo
Chapter 2. AMQ Streams 2.5 Long Term Support
Chapter 2. AMQ Streams 2.5 Long Term Support AMQ Streams 2.5 is a Long Term Support (LTS) offering for AMQ Streams. For information on the LTS terms and dates, see the AMQ Streams LTS Support Policy .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_openshift/ref-lts-str
Chapter 9. Command-Line Utilities
Chapter 9. Command-Line Utilities This chapter contains reference information on command-line utilities used with Red Hat Directory Server (Directory Server). These command-line utilities make it easy to perform administration tasks on the Directory Server. 9.1. ds-replcheck The ds-replcheck utility compares two Directory Server instances or LDIF-formatted files to identify if they are synchronized. For further details, see the Comparing Two Directory Server Instances section in the Red Hat Directory Server Administration Guide . For details about the syntax and command-line options, see the ds-replcheck (1) man page. 9.2. ldif ldif automatically formats LDIF files and creates base-64 encoded attribute values. Base-64 encoding makes it possible to represent binary data, such as a JPEG image, in LDIF. Base-64 encoded data is represented using a double colon ( :: ) symbol. For example: In addition to binary data, other values that must be base-64 encoded can identified with other symbols, including the following: Any value that begins with a space. Any value that begins with a single colon (:). Any value that contains non-ASCII data, including newlines. The ldif command-line utility will take any input and format it with the correct line continuation and appropriate attribute information. The ldif utility also senses whether the input requires base-64 encoding. For details about the syntax and command-line options, see the ldif (5) man page. 9.3. dbscan The dbscan tool analyzes and extracts information from a Directory Server database file. There are four kinds of database files that can be scanned with dbscan : id2entry.db , the main database file for a user database entryrdn.db for a user database secondary index files for a user database, like cn.db numeric_string .db for the changelog in /var/lib/dirsrv/slapd- instance /changelogdb See Section 2.2.2, "Database Files" for more information on database files. Database files use the .db2 , .db3 , .db4 , and .db extensions in their filename, depending on the version of Directory Server. For details about the syntax and command-line options, see the dbscan (1) man page. Examples The following are command-line examples of different situations using dbscan to examine the Directory Server databases. Example 9.1. Dumping the Entry File Example 9.2. Displaying the Index Keys in cn.db Example 9.3. Displaying the Index Keys and the Count of Entries with the Key in mail.db Example 9.4. Displaying the Index Keys and the All IDs with More Than 20 IDs in sn.db Example 9.5. Displaying the Summary of objectclass.db Example 9.6. Displaying VLV Index File Contents Example 9.7. Displaying the Changelog File Contents Example 9.8. Dumping the Index File uid.db with Raw Mode Example 9.9. Displaying the entryID with the Common Name Key "=hr managers" In this example, the common name key is =hr managers , and the equals sign (=) means the key is an equality index. Example 9.10. Displaying an Entry with the entry ID of 7 Example 9.11. Displaying the Contents of entryrdn Index 9.4. ds-logpipe.py The named pipe log script can replace any of the Directory Server log files (access, errors, and audit) with a named pipe. That pipe can be attached to another script which can process the log data before sending it to output, such as only writing lines that match a certain pattern or are of a certain event type. 
Using a named pipe script provides flexibility: The error log level can be set very high for diagnosing an issue to create a log of only the last few hundred or thousand log messages, without a performance hit. Messages can be filtered to keep only certain events of interest. For example, the named pipe script can record only failed BIND attempts in the access log, and other events are discarded. The script can be used to send notifications when events happen, like adding or deleting a user entry or when a specific error occurs. For details about the syntax and command-line options, see the ds-logpipe.py (1) man page. Examples The procedures for configuring the server for named pipe logging are covered in Section 7.5, "Replacing Log Files with a Named Pipe" . The most basic usage of the named pipe log script points to only the named pipe. Example 9.12. Basic Named Pipe Log Script Note When the script exits (either because it completes or because it is terminated through a SIGTERM or Ctrl+C), the script dumps the last 1000 lines of the error log to standard output. The script can be run in the background, and you can interactively monitor the output. In that case, the command kill -1 %1 can be used to tell the script to dump the last 1000 lines of the buffer to stdout, and continue running in the background. Example 9.13. Running the Named Pipe Log Script in the Background To simply dump the last 1000 lines when the script exits (or is killed or interrupted) and save the output to a file automatically, redirect the script output to a user-defined file. Example 9.14. Saving the Output from the Named Pipe Log Script The named pipe script can be configured to start and stop automatically with the Directory Server process. This requires the name of the server's PID file to which to write the script's PID when the script is running, with the -s argument. The PID for the server can be reference either by pointing to the server PID file or by giving the actual process ID number (if the server process is already running). Example 9.15. Specifying the Serve PID A plug-in can be called to read the log data from the named pipe and perform some operation on it. Example 9.16. Named Pipe Log Script with a Related Plug-in In Example 9.16, "Named Pipe Log Script with a Related Plug-in" , only log lines containing the string warning are stored in the internal buffer and printed when the script exits. If no plug-in is passed with the script arguments, the script just buffers 1000 log lines (by default) and prints them upon exit. There are two plug-ins provided with the script: logregex.py keeps only log lines that match the given regular expression. The plug-in argument has the format logregex.regex= pattern to specify the string or regular expression to use. There can be multiple logregex.regex arguments which are all treated as AND statements. The error log line must match all given arguments. To allow any matching log lines to be records (OR), use a single logregex.regex argument with a pipe (|) between the strings or expressions. See the pcre or Python regular expression documentation for more information about regular expressions and their syntax. failedbinds.py logs only failed BIND attempts, so this plug-in is only used for the access log. This takes the option failedbinds.logfile= /path/to/ access.log , which is the file that the actual log messages are written to. 
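Hedged invocation examples for the two bundled plug-ins are shown below; the pipe paths are illustrative, the logregex.py path matches the example above, and the failedbinds.py path is an assumption (it is expected to sit in the same directory).

# keep only lines matching "err" or "warning" (OR expressed with a pipe in a single pattern)
ds-logpipe.py /var/log/dirsrv/slapd-example/errors.pipe --plugin=/usr/share/dirsrv/data/logregex.py "logregex.regex=err|warning"

# record only failed BIND attempts from the access log pipe
ds-logpipe.py /var/log/dirsrv/slapd-example/access.pipe --plugin=/usr/share/dirsrv/data/failedbinds.py failedbinds.logfile=/var/log/dirsrv/slapd-example/access.log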
This plug-in is an example of a complex plug-in that does quite a bit of processing and is a good place to reference to do other types of access log processing. 9.5. dn2rdn Versions of Directory Server older than 9.0 used the entrydn index to help map the entry IDs in the id2entry.db4 database to the full DNs of the entry. (One side effect of this was that modrdn operations could only be done on leaf entries, because there was no way to identify the children of an entry and update their DNs if the parent DN changed.) When subtree-level renames are allowed, then the ID-to-entry mapping is done using the entryrdn index with the id2entry.db database. After an upgrade, instances of Directory Server may still be using the entrydn index. The dn2rdn tool has one purpose: to convert the entry index mapping from a DN-based format to an RDN-based format, by converting the entrydn index to entryrdn . Note The dn2rdn tool is in the /usr/sbin/ directory, since it is always run on the local Directory Server instance. 9.6. pwdhash The pwdhash utility encrypts a specified plain text password. If a user or the Directory Manager cannot log in, use pwdhash to compare the encrypted passwords. You can also use the generated hash to manually reset the Directory Manager's password. The pwdhash utility uses the following storage scheme to encrypt the password: If you pass the -s storage_scheme parameter to pwdhash , the specified scheme will be used. If you pass the -D config_directory parameter to pwdhash , the scheme set in the nsslapd-rootpwstoragescheme attribute will be used. If you neither specify the path to a valid Directory Server configuration directory nor pass a scheme to pwdhash , the utility uses the Directory Server default storage scheme. For further details about storage schemes, a list of supported values, and the default settings, see Section 4.1.43, "Password Storage Schemes" . For details about the syntax and command-line options, see the pwdhash (1) man page.
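For instance, a hedged example of generating a hash with an explicit storage scheme looks like this; the scheme and password are arbitrary, and the output shown is a placeholder rather than a real hash.

pwdhash -s SSHA512 MySecret12
# {SSHA512}...   (the utility prints the hashed value; compare it with the stored value,
#                 or use it to manually reset the Directory Manager's password as noted above)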
[ "jpegPhoto:: encoded data", "dbscan -f /var/lib/dirsrv/slapd- instance /db/userRoot/id2entry.db", "dbscan -f /var/lib/dirsrv/slapd- instance /db/userRoot/cn.db", "dbscan -r -f /var/lib/dirsrv/slapd- instance /db/userRoot/mail.db", "dbscan -r -G 20 -f /var/lib/dirsrv/slapd- instance /db/userRoot/sn.db", "dbscan -s -f /var/lib/dirsrv/slapd- instance /db/userRoot/objectclass.db", "dbscan -r -f /var/lib/dirsrv/slapd- instance /db/userRoot/vlv#bymccoupeopledcpeopledccom.db", "dbscan -f /var/lib/dirsrv/slapd- instance /changelogdb/c1a2fc02-1d11b2-8018afa7-fdce000_424c8a000f00.db", "dbscan -R -f /var/lib/dirsrv/slapd- instance /db/userRoot/uid.db", "dbscan -k \"=hr managers\" -r -f /var/lib/dirsrv/slapd- instance /db/userRoot/cn.db =hr%20managers 7", "dbscan -K 7 -f /var/lib/dirsrv/slapd- instance /db/userRoot/id2entry.db id 7 dn: cn=HR Managers,ou=groups,dc=example,dc=com objectClass: top objectClass: groupOfUniqueNames cn: HR Manager ou: groups description: People who can manage HR entries creatorsName: cn=Directory Manager modifiersName: cn=Directory Manager createTimestamp: 20050408230424Z modifyTimestamp: 20050408230424Z nsUniqueId: 8b465f73-1dd211b2-807fd340-d7f40000 parentid: 3 entryid: 7 entrydn: cn=hr managers,ou=groups,dc=example,dc=com", "dbscan -f /var/lib/dirsrv/slapd- instance /db/userRoot/entryrdn.db -k \"dc=example,dc=com\" dc=example,dc=com ID: 1; RDN: \"dc=example,dc=com\"; NRDN: \"dc=example,dc=com\" C1:dc=example,dc=com ID: 2; RDN: \"cn=Directory Administrators\"; NRDN: \"cn=directory administrators\" 2:cn=directory administrators ID: 2; RDN: \"cn=Directory Administrators\"; NRDN: \"cn=directory administrators\" P2:cn=directory administrators ID: 1; RDN: \"dc=example,dc=com\"; NRDN: \"dc=example,dc=com\" C1:dc=example,dc=com ID: 3; RDN: \"ou=Groups\"; NRDN: \"ou=groups\" 3:ou=groups ID: 3; RDN: \"ou=Groups\"; NRDN: \"ou=groups\" [...]", "ds-logpipe.py /var/log/dirsrc/slapd-example/errors.pipe", "ds-logpipe.py /var/log/dirsrc/slapd-example/errors.pipe &", "ds-logpipe.py /var/log/dirsrc/slapd-example/errors.pipe > /etc/dirsrv/myerrors.log 2>&1", "ds-logpipe.py /var/log/dirsrc/slapd-example/errors.pipe --serverpidfile /var/run/dirsrv/slapd-example.pid", "ds-logpipe.py /var/log/dirsrc/slapd-example/errors.pipe --plugin=/usr/share/dirsrv/data/logregex.py logregex.regex=\"warning\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/command_line_utilities
Chapter 3. Projects
Chapter 3. Projects Projects are a logical collection of rulebooks. A project must be a Git repository, and only the HTTP[S] protocol is supported. The rulebooks of a project must be located in the path defined for Event-Driven Ansible content in Ansible collections: /extensions/eda/rulebooks at the root of the project (a minimal example layout is shown at the end of this chapter). 3.1. Setting up a new project Prerequisites You are logged in to the Event-Driven Ansible controller Dashboard as a Content Consumer. You have set up a credential, if necessary. For more information, see the Setting up credentials section. You have an existing repository containing rulebooks that are integrated with playbooks contained in a repository to be used by automation controller. Procedure Log in to the Event-Driven Ansible controller Dashboard. From the navigation panel, select Projects , then Create project . Insert the following: Name Enter the project name. Description This field is optional. SCM type Git is the only SCM type available for use. SCM URL HTTP[S] protocol address of a repository, such as GitHub or GitLab. Note You cannot edit the SCM URL after you create the project. Credential This field is optional. This is the token needed to utilize the SCM URL. Options The Verify SSL option is enabled by default. Enabling this option verifies the SSL connection with HTTPS when the project is imported. Note You can disable this option if you have a local repository that uses self-signed certificates. Select Create project . Your project is now created and can be managed in the Projects screen. After saving the new project, the project's details page is displayed. From there or the Projects list view, you can edit or delete it. 3.2. Projects list view On the Projects page, you can view the projects that you have created along with the Status and the Git hash . Note If a rulebook changes in source control, you can re-sync a project by selecting the sync icon next to the project in the Projects list view. The Git hash updates represent the latest commit on that repository. An activation must be restarted or recreated if you want to use the updated project. 3.3. Editing a project Procedure From the Projects list view, select the More Actions icon ... next to the desired project. Select Edit project . Enter the required changes and select Save project . 3.4. Deleting a project Procedure From the Projects list view, select the More Actions icon ... next to the desired project. Select Delete project . In the popup window, select Yes, I confirm that I want to delete this project . Select Delete project .
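For illustration, a minimal project repository that satisfies the rulebook path requirement described at the start of this chapter might look like the following; the repository and rulebook names are invented.

my-eda-project/
    extensions/
        eda/
            rulebooks/
                hello-events.yml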
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/event-driven_ansible_controller_user_guide/eda-projects
4.5. Uploading the Image into Google Cloud Storage
4.5. Uploading the Image into Google Cloud Storage You must log in using the gcloud auth login command before uploading the image to Google Cloud. Running the command opens a browser and prompts for your Google account credentials. The PROJECT_ID is set by default; follow the subsequent CLI instructions and make changes if required. Use Google's gsutil command to create the storage bucket and upload the image.
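A hedged end-to-end example of the upload follows; the bucket name and image archive match the commands referenced in this section, while the optional gcloud config set project step for switching projects is an addition that is not part of the original instructions.

gcloud auth login                                  # opens a browser for Google account credentials
gcloud config set project my-project-id            # optional; "my-project-id" is an example PROJECT_ID
gsutil mb gs://rhgs_image_upload                    # create the storage bucket
gsutil cp disk.raw.tar.gz gs://rhgs_image_upload    # upload the image archive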
[ "gsutil mb gs://rhgs_image_upload gsutil cp disk.raw.tar.gz gs://rhgs_image_upload" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-documentation-deployment_guide_for_public_cloud-google_cloud_platform-upload_image
9.4. Updating a Configuration
9.4. Updating a Configuration Updating the cluster configuration consists of editing the cluster configuration file ( /etc/cluster/cluster.conf ) and propagating it to each node in the cluster. You can update the configuration using either of the following procedures: Section 9.4.1, "Updating a Configuration Using cman_tool version -r " Section 9.4.2, "Updating a Configuration Using scp " 9.4.1. Updating a Configuration Using cman_tool version -r To update the configuration using the cman_tool version -r command, perform the following steps: At any node in the cluster, edit the /etc/cluster/cluster.conf file. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3" ). Save /etc/cluster/cluster.conf . Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. It is necessary that ricci be running in each cluster node to be able to propagate updated cluster configuration information. Verify that the updated cluster.conf configuration file has been propagated. If not, use the scp command to propagate it to /etc/cluster/ in each cluster node. You may skip this step (restarting cluster software) if you have made only the following configuration changes: Deleting a node from the cluster configuration- except where the node count changes from greater than two nodes to two nodes. For information about deleting a node from a cluster and transitioning from greater than two nodes to two nodes, see Section 9.2, "Deleting or Adding a Node" . Adding a node to the cluster configuration- except where the node count changes from two nodes to greater than two nodes. For information about adding a node to a cluster and transitioning from two nodes to greater than two nodes, see Section 9.2.2, "Adding a Node to a Cluster" . Changes to how daemons log information. HA service/VM maintenance (adding, editing, or deleting). Resource maintenance (adding, editing, or deleting). Failover domain maintenance (adding, editing, or deleting). Otherwise, you must restart the cluster software as follows: At each node, stop the cluster software according to Section 9.1.2, "Stopping Cluster Software" . At each node, start the cluster software according to Section 9.1.1, "Starting Cluster Software" . Stopping and starting the cluster software ensures that any configuration changes that are checked only at startup time are included in the running configuration. At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example: At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays status of the cluster nodes. For example: If the cluster is running as expected, you are done updating the configuration.
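For example, the version bump in step 2 amounts to editing one attribute at the top of /etc/cluster/cluster.conf and then propagating the file; the cluster name is a placeholder.

# before:  <cluster name="mycluster" config_version="2">
# after:   <cluster name="mycluster" config_version="3">
cman_tool version -r     # propagate the updated file (ricci must be running on every node)
cman_tool nodes          # confirm each member shows "M" in the Sts column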
[ "cman_tool nodes Node Sts Inc Joined Name 1 M 548 2010-09-28 10:52:21 node-01.example.com 2 M 548 2010-09-28 10:52:21 node-02.example.com 3 M 544 2010-09-28 10:52:21 node-03.example.com", "clustat Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node-03.example.com 3 Online, rgmanager node-02.example.com 2 Online, rgmanager node-01.example.com 1 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:example_apache node-01.example.com started service:example_apache2 (none) disabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-updating-config-CA
Chapter 10. Migrating your applications
Chapter 10. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or from the command line . You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. During migration, MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 10.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.13 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. Additional resources for migration prerequisites Manually exposing a secure registry for OpenShift Container Platform 3 Updating deprecated internal images 10.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 10.2.1. 
Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 10.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. 
To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 10.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket.
GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 10.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. 
This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources MTC file system copy method MTC snapshot copy method 10.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
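The same verification can also be performed from the command line on the target cluster. The following sketch is illustrative only: it assumes the oc client is logged in to the target cluster with cluster-admin privileges, and <migrated_namespace> is a placeholder for the namespace you migrated.
# Check workloads, claims, and routes in the migrated namespace
oc get pods -n <migrated_namespace>
oc get pvc -n <migrated_namespace>
oc get routes -n <migrated_namespace>
# MTC sets migrated PVs to the Retain reclaim policy; list the policy per volume
oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
If you need the original reclaim policy back, restore it manually on the persistent volume as noted in the preceding section.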
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" }," ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/migrating-applications-3-4
Chapter 27. Adding the IdM CA service to an IdM server in a deployment with a CA
Chapter 27. Adding the IdM CA service to an IdM server in a deployment with a CA If your Identity Management (IdM) environment already has the IdM certificate authority (CA) service installed but a particular IdM server, idmserver , was installed as an IdM replica without a CA, you can add the CA service to idmserver by using the ipa-ca-install command. Note This procedure is identical for both the following scenarios: The IdM CA is a root CA. The IdM CA is subordinate to an external, root CA. Prerequisites You have root permissions on idmserver . The IdM server is installed on idmserver . Your IdM deployment has a CA installed on another IdM server. You know the IdM Directory Manager password. Procedure On idmserver , install the IdM Certificate Server CA:
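The following is a minimal sketch of the command and two follow-up checks, assuming a RHEL 8 IdM server where idmserver is the placeholder host name used in this chapter. The installer prompts for the Directory Manager password.
# Install the CA service on this replica
ipa-ca-install
# Confirm that the PKI service is now running on idmserver
ipactl status | grep -i pki
# Confirm that idmserver is now listed among the IPA CA servers
ipa config-show | grep -i "IPA CA servers"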
[ "[root@idmserver ~] ipa-ca-install" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/adding-the-idm-ca-service-to-an-idm-server-in-a-deployment-with-a-ca_installing-identity-management
Preface
Preface Preface
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_proxy/preface
Chapter 2. Sidecar injection
Chapter 2. Sidecar injection To use Istio's capabilities within a service mesh, each pod needs a sidecar proxy, configured and managed by the Istio control plane. 2.1. About sidecar injection Sidecar injection is enabled using labels at the namespace or pod level. These labels also indicate the specific control plane managing the proxy. When you apply a valid injection label to the pod template defined in a deployment, any new pods created by that deployment automatically receive a sidecar. Similarly, applying a pod injection label at the namespace level ensures any new pods in that namespace include a sidecar. Note Injection happens at pod creation through an admission controller, so changes appear on individual pods rather than the deployment resources. To confirm sidecar injection, check the pod details directly using oc describe , where you can see the injected Istio proxy container. 2.2. Identifying the revision name The label required to enable sidecar injection is determined by the specific control plane instance, known as a revision. Each revision is managed by an IstioRevision resource, which is automatically created and managed by the Istio resource, so manual creation or modification of IstioRevision resources is generally unnecessary. The naming of an IstioRevision depends on the spec.updateStrategy.type setting in the Istio resource. If set to InPlace , the revision shares the Istio resource name. If set to RevisionBased , the revision name follows the format <Istio resource name>-v<version> . Typically, each Istio resource corresponds to a single IstioRevision . However, during a revision-based upgrade, multiple IstioRevision resources may exist, each representing a distinct control plane instance. To see available revision names, use the following command: USD oc get istiorevisions You should see output similar to the following example: Example output NAME READY STATUS IN USE VERSION AGE my-mesh-v1-23-0 True Healthy False v1.23.0 114s 2.2.1. Enabling sidecar injection with default revision When the service mesh's IstioRevision name is default , it's possible to use the following labels on a namespace or a pod to enable sidecar injection: Resource Label Enabled value Disabled value Namespace istio-injection enabled disabled Pod sidecar.istio.io/inject true false Note You can also enable injection by setting the istio.io/rev: default label in the namespace or pod. 2.2.2. Enabling sidecar injection with other revisions When the IstioRevision name is not default , use the specific IstioRevision name with the istio.io/rev label to map the pod to the desired control plane and enable sidecar injection. To enable injection, set the istio.io/rev: <revision_name> label in either the namespace or the pod, as adding it to both is not required. For example, with the revision shown above, the following labels would enable sidecar injection: Resource Enabled label Disabled label Namespace istio.io/rev=my-mesh-v1-23-0 istio-injection=disabled Pod istio.io/rev=my-mesh-v1-23-0 sidecar.istio.io/inject="false" Note When both istio-injection and istio.io/rev labels are applied, the istio-injection label takes precedence and treats the namespace as part of the default revision. 2.3. Enabling sidecar injection To demonstrate different approaches for configuring sidecar injection, the following procedures use the Bookinfo application. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods. You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane. Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo has been deployed to the bookinfo namespace, but sidecar injection (step 5) has not been configured. 2.3.1. Enabling sidecar injection with namespace labels In this example, all workloads within a namespace receive a sidecar proxy injection, making it the best approach when the majority of workloads in the namespace should be included in the mesh. Procedure Verify the revision name of the Istio control plane using the following command: USD oc get istiorevisions You should see output similar to the following example: Example output NAME TYPE READY STATUS IN USE VERSION AGE default Local True Healthy False v1.23.0 4m57s Since the revision name is default, you can use the default injection labels without referencing the exact revision name. Verify that workloads already running in the desired namespace show 1/1 containers as READY by using the following command. This confirms that the pods are running without sidecars. USD oc get pods -n bookinfo You should see output similar to the following example: Example output NAME READY STATUS RESTARTS AGE details-v1-65cfcf56f9-gm6v7 1/1 Running 0 4m55s productpage-v1-d5789fdfb-8x6bk 1/1 Running 0 4m53s ratings-v1-7c9bd4b87f-6v7hg 1/1 Running 0 4m55s reviews-v1-6584ddcf65-6wqtw 1/1 Running 0 4m54s reviews-v2-6f85cb9b7c-w9l8s 1/1 Running 0 4m54s reviews-v3-6f5b775685-mg5n6 1/1 Running 0 4m54s To apply the injection label to the bookinfo namespace, run the following command at the CLI: USD oc label namespace bookinfo istio-injection=enabled namespace/bookinfo labeled To ensure sidecar injection is applied, redeploy the existing workloads in the bookinfo namespace. Use the following command to perform a rolling update of all workloads: USD oc -n bookinfo rollout restart deployments Verification Verify the rollout by checking that the new pods display 2/2 containers as READY , confirming successful sidecar injection by running the following command: USD oc get pods -n bookinfo You should see output similar to the following example: Example output NAME READY STATUS RESTARTS AGE details-v1-7745f84ff-bpf8f 2/2 Running 0 55s productpage-v1-54f48db985-gd5q9 2/2 Running 0 55s ratings-v1-5d645c985f-xsw7p 2/2 Running 0 55s reviews-v1-bd5f54b8c-zns4v 2/2 Running 0 55s reviews-v2-5d7b9dbf97-wbpjr 2/2 Running 0 55s reviews-v3-5fccc48c8c-bjktn 2/2 Running 0 55sz 2.3.2. Exclude a workload from the mesh You can exclude specific workloads from sidecar injection within a namespace where injection is enabled for all workloads. Note This example is for demonstration purposes only. The bookinfo application requires all workloads to be part of the mesh for proper functionality. Procedure Open the application's Deployment resource in an editor. In this case, exclude the ratings-v1 service. Modify the spec.template.metadata.labels section of your Deployment resource to include the label sidecar.istio.io/inject: false to disable sidecar injection. kind: Deployment apiVersion: apps/v1 metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'false' Note Adding the label to the top-level labels section of the Deployment does not affect sidecar injection. 
Updating the deployment triggers a rollout, creating a new ReplicaSet with updated pod(s). Verification Verify that the updated pod(s) do not contain a sidecar container and show 1/1 containers as Running by running the following command: USD oc get pods -n bookinfo You should see output similar to the following example: Example output NAME READY STATUS RESTARTS AGE details-v1-6bc7b69776-7f6wz 2/2 Running 0 29m productpage-v1-54f48db985-gd5q9 2/2 Running 0 29m ratings-v1-5d645c985f-xsw7p 1/1 Running 0 7s reviews-v1-bd5f54b8c-zns4v 2/2 Running 0 29m reviews-v2-5d7b9dbf97-wbpjr 2/2 Running 0 29m reviews-v3-5fccc48c8c-bjktn 2/2 Running 0 29m 2.3.3. Enabling sidecar injection with pod labels This approach allows you to include individual workloads for sidecar injection instead of applying it to all workloads within a namespace, making it ideal for scenarios where only a few workloads need to be part of a service mesh. This example also demonstrates the use of a revision label for sidecar injection, where the Istio resource is created with the name my-mesh . A unique Istio resource name is required when multiple Istio control planes are present in the same cluster or during a revision-based control plane upgrade. Procedure Verify the revision name of the Istio control plane by running the following command: USD oc get istiorevisions You should see output similar to the following example: Example output NAME TYPE READY STATUS IN USE VERSION AGE my-mesh Local True Healthy False v1.23.0 47s Since the revision name is my-mesh , use the revision label istio.io/rev=my-mesh to enable sidecar injection. Verify that workloads already running show 1/1 containers as READY , indicating that the pods are running without sidecars by running the following command: USD oc get pods -n bookinfo You should see output similar to the following example: Example output NAME READY STATUS RESTARTS AGE details-v1-65cfcf56f9-gm6v7 1/1 Running 0 4m55s productpage-v1-d5789fdfb-8x6bk 1/1 Running 0 4m53s ratings-v1-7c9bd4b87f-6v7hg 1/1 Running 0 4m55s reviews-v1-6584ddcf65-6wqtw 1/1 Running 0 4m54s reviews-v2-6f85cb9b7c-w9l8s 1/1 Running 0 4m54s reviews-v3-6f5b775685-mg5n6 1/1 Running 0 4m54s Open the application's Deployment resource in an editor. In this case, update the ratings-v1 service. Update the spec.template.metadata.labels section of your Deployment to include the appropriate pod injection or revision label. In this case, istio.io/rev: my-mesh : kind: Deployment apiVersion: apps/v1 metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: istio.io/rev: my-mesh Note Adding the label to the Deployment's top-level `labels section does not impact sidecar injection. Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pod(s). Verification Verify that only the ratings-v1 pod now shows 2/2 containers READY , indicating that the sidecar has been successfully injected by running the following command: USD oc get pods -n bookinfo You should see output similar to the following example: Example output NAME READY STATUS RESTARTS AGE details-v1-559cd49f6c-b89hw 1/1 Running 0 42m productpage-v1-5f48cdcb85-8ppz5 1/1 Running 0 42m ratings-v1-848bf79888-krdch 2/2 Running 0 9s reviews-v1-6b7444ffbd-7m5wp 1/1 Running 0 42m reviews-v2-67876d7b7-9nmw5 1/1 Running 0 42m reviews-v3-84b55b667c-x5t8s 1/1 Running 0 42m Repeat for other workloads that you wish to include in the mesh. 2.4. 
Additional resources About admission controllers Istio sidecar injection problems Bookinfo application Scoping service mesh with discovery selectors
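As a complement to the procedures above, injection for a revision that is not named default can also be enabled at the namespace level. The following sketch assumes the bookinfo namespace and a revision named my-mesh, as in the examples in this chapter; it is one possible workflow, not the only supported one.
# Map every workload in the namespace to the my-mesh control plane
oc label namespace bookinfo istio.io/rev=my-mesh
# Recreate the pods so the admission controller injects the sidecars
oc -n bookinfo rollout restart deployments
# Show the revision and injection labels that each pod ended up with
oc get pods -n bookinfo -L istio.io/rev -L sidecar.istio.io/inject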
[ "oc get istiorevisions", "NAME READY STATUS IN USE VERSION AGE my-mesh-v1-23-0 True Healthy False v1.23.0 114s", "oc get istiorevisions", "NAME TYPE READY STATUS IN USE VERSION AGE default Local True Healthy False v1.23.0 4m57s", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-65cfcf56f9-gm6v7 1/1 Running 0 4m55s productpage-v1-d5789fdfb-8x6bk 1/1 Running 0 4m53s ratings-v1-7c9bd4b87f-6v7hg 1/1 Running 0 4m55s reviews-v1-6584ddcf65-6wqtw 1/1 Running 0 4m54s reviews-v2-6f85cb9b7c-w9l8s 1/1 Running 0 4m54s reviews-v3-6f5b775685-mg5n6 1/1 Running 0 4m54s", "oc label namespace bookinfo istio-injection=enabled namespace/bookinfo labeled", "oc -n bookinfo rollout restart deployments", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-7745f84ff-bpf8f 2/2 Running 0 55s productpage-v1-54f48db985-gd5q9 2/2 Running 0 55s ratings-v1-5d645c985f-xsw7p 2/2 Running 0 55s reviews-v1-bd5f54b8c-zns4v 2/2 Running 0 55s reviews-v2-5d7b9dbf97-wbpjr 2/2 Running 0 55s reviews-v3-5fccc48c8c-bjktn 2/2 Running 0 55sz", "kind: Deployment apiVersion: apps/v1 metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'false'", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-6bc7b69776-7f6wz 2/2 Running 0 29m productpage-v1-54f48db985-gd5q9 2/2 Running 0 29m ratings-v1-5d645c985f-xsw7p 1/1 Running 0 7s reviews-v1-bd5f54b8c-zns4v 2/2 Running 0 29m reviews-v2-5d7b9dbf97-wbpjr 2/2 Running 0 29m reviews-v3-5fccc48c8c-bjktn 2/2 Running 0 29m", "oc get istiorevisions", "NAME TYPE READY STATUS IN USE VERSION AGE my-mesh Local True Healthy False v1.23.0 47s", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-65cfcf56f9-gm6v7 1/1 Running 0 4m55s productpage-v1-d5789fdfb-8x6bk 1/1 Running 0 4m53s ratings-v1-7c9bd4b87f-6v7hg 1/1 Running 0 4m55s reviews-v1-6584ddcf65-6wqtw 1/1 Running 0 4m54s reviews-v2-6f85cb9b7c-w9l8s 1/1 Running 0 4m54s reviews-v3-6f5b775685-mg5n6 1/1 Running 0 4m54s", "kind: Deployment apiVersion: apps/v1 metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: istio.io/rev: my-mesh", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-559cd49f6c-b89hw 1/1 Running 0 42m productpage-v1-5f48cdcb85-8ppz5 1/1 Running 0 42m ratings-v1-848bf79888-krdch 2/2 Running 0 9s reviews-v1-6b7444ffbd-7m5wp 1/1 Running 0 42m reviews-v2-67876d7b7-9nmw5 1/1 Running 0 42m reviews-v3-84b55b667c-x5t8s 1/1 Running 0 42m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/installing/ossm-sidecar-injection_ossm-customizing-istio-configuration
Chapter 15. User and Group Schema
Chapter 15. User and Group Schema When a user entry is created, it is automatically assigned certain LDAP object classes which, in turn, make available certain attributes. LDAP attributes are the way that information is stored in the directory. (This is discussed in detail in the Directory Server Deployment Guide and the Directory Server Schema Reference .) Table 15.1. Default Identity Management User Object Classes Object Classes Description ipaobject ipasshuser IdM object classes person organizationalperson inetorgperson inetuser posixAccount Person object classes krbprincipalaux krbticketpolicyaux Kerberos object classes mepOriginEntry Managed entries (template) object classes A number of attributes are available to user entries. Some are set manually and some are set based on defaults if a specific value is not set. There is also an option to add any attributes available in the object classes in Table 15.1, "Default Identity Management User Object Classes" , even if there is not a UI or command-line argument for that attribute. Additionally, the values generated or used by the default attributes can be configured, as in Section 15.4, "Specifying Default User and Group Attributes" . Table 15.2. Default Identity Management User Attributes UI Field Command-Line Option Required, Optional, or Default [a] User login username Required First name --first Required Last name --last Required Full name --cn Optional Display name --displayname Optional Initials --initials Default Home directory --homedir Default GECOS field --gecos Default Shell --shell Default Kerberos principal --principal Default Email address --email Optional Password --password [b] Optional User ID number --uid Default Group ID number --gidnumber Default Street address --street Optional City --city Optional State/Province --state Optional Zip code --postalcode Optional Telephone number --phone Optional Mobile telephone number --mobile Optional Pager number --pager Optional Fax number --fax Optional Organizational unit --orgunit Optional Job title --title Optional Manager --manager Optional Car license --carlicense Optional --noprivate Optional SSH Keys --sshpubkey Optional Additional attributes --addattr Optional Department Number --departmentnumber Optional Employee Number --employeenumber Optional Employee Type --employeetype Optional Preferred Language --preferredlanguage Optional [a] Required attributes must be set for every entry. Optional attributes may be set, while default attributes are automatically added with a predefined value unless a specific value is given. [b] The script prompts for the new password, rather than accepting a value with the argument. 15.1. About Changing the Default User and Group Schema It is possible to add or change the object classes and attributes used for user and group entries ( Chapter 15, User and Group Schema ). The IdM configuration provides some validation when object classes are changed: All of the object classes and their specified attributes must be known to the LDAP server. All default attributes that are configured for the entry must be supported by the configured object classes. There are limits to the IdM schema validation, however. Most important, the IdM server does not check that the defined user or group object classes contain all of the required object classes for IdM entries. For example, all IdM entries require the ipaobject object class. 
However, when the user or group schema is changed, the server does not check to make sure that this object class is included; if the object class is accidentally deleted, then future entry add operations will fail. Also, all object class changes are atomic, not incremental. The entire list of default object classes has to be defined every time there is a change. For example, a company may create a custom object class to store employee information like birthdays and employment start dates. The administrator cannot simply add the custom object class to the list; he must set the entire list of current default object classes plus the new object class. The existing default object classes must always be included when the configuration is updated. Otherwise, the current settings will be overwritten, which causes serious performance problems.
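Because object class changes are atomic, a typical update is a two-step operation: read the current list, then write the complete list back with the new class appended. The following sketch illustrates this with a hypothetical customemployeeinfo object class; the class list shown roughly mirrors Table 15.1 and may not match your deployment, so always start from the output of ipa config-show.
# Display the current default user object classes
ipa config-show --all | grep -i objectclass
# Write the complete list back, with the custom class added at the end
ipa config-mod --userobjectclasses={top,person,organizationalperson,inetorgperson,inetuser,posixaccount,krbprincipalaux,krbticketpolicyaux,ipaobject,ipasshuser,mepOriginEntry,customemployeeinfo}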
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/user-schema
Policy APIs
Policy APIs OpenShift Container Platform 4.15 Reference guide for policy APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/policy_apis/index
Chapter 7. Uninstalling a cluster on Azure Stack Hub
Chapter 7. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
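The following sketch shows one way to check for leftover resources after the destroy command completes. It assumes the az CLI is configured for your Azure Stack Hub environment and that jq is installed; the infraID value and the kubernetes.io_cluster tag follow the installer's usual naming and tagging convention and might need adjusting for your deployment.
# The identifier the installer used when naming and tagging cloud resources
INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
# Look for resource groups or tagged resources that were not cleaned up
az group list --query "[?contains(name, '$INFRA_ID')].name" -o tsv
az resource list --tag "kubernetes.io_cluster.$INFRA_ID=owned" -o table
# Remove the installation directory when you no longer need the cluster assets
rm -rf <installation_directory>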
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub
Chapter 299. Service Component
Chapter 299. Service Component Available as of Camel version 2.22 299.1. Using the service endpoint 299.2. URI format 299.3. Options The Service component supports 3 options, which are listed below. Name Description Default Type service (advanced) Inject the service to use. ServiceRegistry serviceSelector (advanced) Inject the service selector used to lookup the ServiceRegistry to use. Selector resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Service endpoint is configured using URI syntax: with the following path and query parameters: 299.3.1. Path Parameters (1 parameters): Name Description Default Type delegateUri Required The endpoint uri to expose as service String 299.3.2. Query Parameters (4 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 299.4. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.service.enabled Whether to enable auto configuration of the service component. This is enabled by default. Boolean camel.component.service.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.service.service Inject the service to use. The option is a org.apache.camel.cloud.ServiceRegistry type. String camel.component.service.service-selector Inject the service selector used to lookup the ServiceRegistry to use. The option is a org.apache.camel.cloud.ServiceRegistry.Selector type. String 299.5. Implementations Camel provide the following ServiceRegistry implementations: camel-consul camel-zookeeper camel-spring-cloud 299.6. See Also Configuring Camel Component Endpoint Getting Started
[ "service:serviceName:endpoint[?options]", "service:delegateUri" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/service-component
E.2.20. /proc/modules
E.2.20. /proc/modules This file displays a list of all modules loaded into the kernel. Its contents vary based on the configuration and use of your system, but it should be organized in a similar manner to this sample /proc/modules file output: Note This example has been reformatted into a readable format. Most of this information can also be viewed via the /sbin/lsmod command. The first column contains the name of the module. The second column refers to the memory size of the module, in bytes. The third column lists how many instances of the module are currently loaded. A value of zero represents an unloaded module. The fourth column states if the module depends upon another module to be present in order to function, and lists those other modules. The fifth column lists what load state the module is in: Live , Loading , or Unloading are the only possible values. The sixth column lists the current kernel memory offset for the loaded module. This information can be useful for debugging purposes, or for profiling tools such as oprofile .
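A couple of quick one-liners can make this file easier to read; these are illustrative shell snippets, not Red Hat tooling.
# Sort loaded modules by their use count (third column), highest first
awk '{print $3, $1}' /proc/modules | sort -rn | head
# List modules whose use count is zero and that nothing depends on
awk '$3 == 0 && $4 == "-" {print $1}' /proc/modules
# Compare a single entry with the lsmod view of the same module
grep '^nfs ' /proc/modules
/sbin/lsmod | grep '^nfs '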
[ "nfs 170109 0 - Live 0x129b0000 lockd 51593 1 nfs, Live 0x128b0000 nls_utf8 1729 0 - Live 0x12830000 vfat 12097 0 - Live 0x12823000 fat 38881 1 vfat, Live 0x1287b000 autofs4 20293 2 - Live 0x1284f000 sunrpc 140453 3 nfs,lockd, Live 0x12954000 3c59x 33257 0 - Live 0x12871000 uhci_hcd 28377 0 - Live 0x12869000 md5 3777 1 - Live 0x1282c000 ipv6 211845 16 - Live 0x128de000 ext3 92585 2 - Live 0x12886000 jbd 65625 1 ext3, Live 0x12857000 dm_mod 46677 3 - Live 0x12833000" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-modules
4.9. Configuring Headless Virtual Machines
4.9. Configuring Headless Virtual Machines You can configure a headless virtual machine when it is not necessary to access the machine via a graphical console. This headless machine will run without graphical and video devices. This can be useful in situations where the host has limited resources, or to comply with virtual machine usage requirements such as real-time virtual machines. Headless virtual machines can be administered via a Serial Console, SSH, or any other service for command line access. Headless mode is applied via the Console tab when creating or editing virtual machines and machine pools, and when editing templates. It is also available when creating or editing instance types. If you are creating a new headless virtual machine, you can use the Run Once window to access the virtual machine via a graphical console for the first run only. See Section A.2, "Explanation of Settings in the Run Once Window" for more details. Prerequisites If you are editing an existing virtual machine, and the Red Hat Virtualization guest agent has not been installed, note the machine's IP prior to selecting Headless Mode . Before running a virtual machine in headless mode, the GRUB configuration for this machine must be set to console mode, otherwise the guest operating system's boot process will hang. To set console mode, comment out the splashimage flag in the GRUB menu configuration file: Note Restart the virtual machine if it is running when selecting the Headless Mode option. Configuring a Headless Virtual Machine Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Console tab. Select Headless Mode . All other fields in the Graphical Console section are disabled. Optionally, select Enable VirtIO serial console to enable communicating with the virtual machine via serial console. This is highly recommended. Reboot the virtual machine if it is running. See Section 6.3, "Rebooting a Virtual Machine" .
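The GRUB change only affects the boot loader display. For a usable text login on a headless guest, the kernel also needs a serial console argument, and you need a way to attach to it. The lines below are a sketch for a RHEL 6 style guest; the exact device names and access method (RHV serial console, SSH, or virsh on a host with libvirt access) depend on your environment.
# Example kernel line in the guest's GRUB configuration, matching the serial
# settings referenced in this section (9600 baud on the first serial port)
#   kernel /vmlinuz-<version> ro root=<root_device> console=tty0 console=ttyS0,9600
# From a host that has libvirt access to the guest, attach to that console
virsh console <vm_name>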
[ "#splashimage=(hd0,0)/grub/splash.xpm.gz serial --unit=0 --speed=9600 --parity=no --stop=1 terminal --timeout=2 serial" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/Configuring_Headless_Machines
Chapter 5. Monitoring the Load-balancing service
Chapter 5. Monitoring the Load-balancing service In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to keep load balancing operational, you can use the load-balancer management network and create, modify, and delete load-balancing health monitors: Section 5.1, "The Load-balancing service management network" Section 5.2, "Load-balancing service instance monitoring" Section 5.3, "Load-balancing service pool member monitoring" Section 5.4, "Load balancer provisioning status monitoring" Section 5.5, "Load balancer functionality monitoring" Section 5.6, "About Load-balancing service health monitors" Section 5.7, "Creating Load-balancing service health monitors" Section 5.8, "Modifying Load-balancing service health monitors" Section 5.9, "Deleting Load-balancing service health monitors" Section 5.10, "Best practices for Load-balancing service HTTP health monitors" 5.1. The Load-balancing service management network The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) controller pods require network connectivity across the OpenStack cloud in order to monitor and manage amphora load-balancer virtual machines (VMs). The Load-balancing service management network is actually two OpenStack networks: a project (tenant) network that is connected to the amphora VMs; and a provider network connecting Load-balancing service controllers running in the podified control plane through a network defined by a Red Hat OpenShift network attachment. An OpenStack router routes packets between the project network and the provider network with both the control plane pods and load balancer VMs having routes configured to direct traffic through the router for those networks. 5.2. Load-balancing service instance monitoring In Red Hat OpenStack Services on OpenShift (RHOSO) environments, the Load-balancing service (octavia) monitors the load balancing instances (amphorae) and initiates failovers and replacements if the amphorae malfunction. Any time a failover occurs, the Load-balancing service logs the failover in the corresponding health manager log on the controller in /var/log/containers/octavia . Use log analytics to monitor failover trends to address problems early. Problems such as Networking service (neutron) connectivity issues, Denial of Service attacks, and Compute service (nova) malfunctions often lead to higher failover rates for load balancers. 5.3. Load-balancing service pool member monitoring In Red Hat OpenStack Services on OpenShift (RHOSO) environments, the Load-balancing service (octavia) uses the health information from the underlying load balancing subsystems to determine the health of members of the load-balancing pool. Health information is streamed to the Load-balancing service database, and made available by the status tree or other API methods. For critical applications, you must poll for health information in regular intervals. 5.4. Load balancer provisioning status monitoring In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can monitor the provisioning status of a load balancer and send alerts if the provisioning status is ERROR . Do not configure an alert to trigger when an application is making regular changes to the pool and enters several PENDING stages. The provisioning status of load balancer objects reflect the ability of the control plane to contact and successfully provision a create, update, and delete request. The operating status of a load balancer object reports on the current functionality of the load balancer. 
For example, a load balancer might have a provisioning status of ERROR , but an operating status of ONLINE . This might be caused by a Networking service (neutron) failure that blocked that last requested update to the load balancer configuration from successfully completing. In this case, the load balancer continues to process traffic through the load balancer, but might not have applied the latest configuration updates yet. 5.5. Load balancer functionality monitoring You can monitor the operational status of your load balancer and its child objects in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. You can also use an external monitoring service that connects to your load balancer listeners and monitors them from outside of the cloud. An external monitoring service indicates if there is a failure outside of the Load-balancing service (octavia) that might impact the functionality of your load balancer, such as router failures, network connectivity issues, and so on. 5.6. About Load-balancing service health monitors A Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) health monitor is a process that does periodic health checks on each back end member server to pre-emptively detect failed servers and temporarily pull them out of the pool. If the health monitor detects a failed server, it removes the server from the pool and marks the member in ERROR . After you have corrected the server and it is functional again, the health monitor automatically changes the status of the member from ERROR to ONLINE , and resumes passing traffic to it. Always use health monitors in production load balancers. If you do not have a health monitor, failed servers are not removed from the pool. This can lead to service disruption for web clients. There are several types of health monitors, as briefly described here: HTTP by default, probes the / path on the application server. HTTPS operates exactly like HTTP health monitors, but with TLS back end servers. If the servers perform client certificate validation, HAProxy does not have a valid certificate. In these cases, TLS-HELLO health monitoring is an alternative. TLS-HELLO ensures that the back end server responds to SSLv3-client hello messages. A TLS-HELLO health monitor does not check any other health metrics, like status code or body contents. PING sends periodic ICMP ping requests to the back end servers. You must configure back end servers to allow PINGs so that these health checks pass. Important A PING health monitor checks only if the member is reachable and responds to ICMP echo requests. PING health monitors do not detect if the application that runs on an instance is healthy. Use PING health monitors only in cases where an ICMP echo request is a valid health check. TCP opens a TCP connection to the back end server protocol port. The TCP application opens a TCP connection and, after the TCP handshake, closes the connection without sending any data. UDP-CONNECT performs a basic UDP port connect. A UDP-CONNECT health monitor might not work correctly if Destination Unreachable (ICMP type 3) is not enabled on the member server, or if it is blocked by a security rule. In these cases, a member server might be marked as having an operating status of ONLINE when it is actually down. 5.7. Creating Load-balancing service health monitors Use Load-balancing service (octavia) health monitors to avoid service disruptions for your users. 
The health monitors run periodic health checks on each back end server to pre-emptively detect failed servers and temporarily pull the servers out of the pool in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Run the openstack loadbalancer healthmonitor create command, using argument values that are appropriate for your site. All health monitor types require the following configurable arguments: <pool> Name or ID of the pool of back-end member servers to be monitored. --type The type of health monitor. One of HTTP , HTTPS , PING , SCTP , TCP , TLS-HELLO , or UDP-CONNECT . --delay Number of seconds to wait between health checks. --timeout Number of seconds to wait for any given health check to complete. timeout must always be smaller than delay . --max-retries Number of health checks a back-end server must fail before it is considered down. Also, the number of health checks that a failed back-end server must pass to be considered up again. In addition, HTTP health monitor types also require the following arguments, which are set by default: --url-path Path part of the URL that should be retrieved from the back-end server. By default this is / . --http-method HTTP method that is used to retrieve the url_path . By default this is GET . --expected-codes List of HTTP status codes that indicate an OK health check. By default this is 200 . Example Verification Run the openstack loadbalancer healthmonitor list command and verify that your health monitor is running. 5.8. Modifying Load-balancing service health monitors You can modify the configuration for Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) health monitors when you want to change the interval for sending probes to members, the connection timeout interval, the HTTP method for requests, and so on. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Modify your health monitor ( my-health-monitor ). In this example, a user is changing the time in seconds that the health monitor waits between sending probes to members. Example Verification Run the openstack loadbalancer healthmonitor show command to confirm your configuration changes. 5.9. Deleting Load-balancing service health monitors You can remove a Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) health monitor. Tip An alternative to deleting a health monitor is to disable it by using the openstack loadbalancer healthmonitor set --disable command. 
Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Delete the health monitor ( my-health-monitor ). Example Verification Run the openstack loadbalancer healthmonitor list command to verify that the health monitor you deleted no longer exists. 5.10. Best practices for Load-balancing service HTTP health monitors In Red Hat OpenStack Services on OpenShift (RHOSO) environments, when you write the code that generates the health check in your web application, use the following best practices: The health monitor url-path does not require authentication to load. By default, the health monitor url-path returns an HTTP 200 OK status code to indicate a healthy server unless you specify alternate expected-codes . The health check does enough internal checks to ensure that the application is healthy and no more. Ensure that the following conditions are met for the application: Any required database or other external storage connections are up and running. The load is acceptable for the server on which the application runs. Your site is not in maintenance mode. Tests specific to your application are operational. The page generated by the health check should be small in size: It returns in a sub-second interval. It does not induce significant load on the application server. The page generated by the health check is never cached, although the code that runs the health check might reference cached data. For example, you might find it useful to run a more extensive health check using cron and store the results to disk. The code that generates the page at the health monitor url-path incorporates the results of this cron job in the tests it performs. Because the Load-balancing service only processes the HTTP status code returned, and because health checks are run so frequently, you can use the HEAD or OPTIONS HTTP methods to skip processing the entire page.
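The following sketch ties these practices together. It assumes a pool named lb-pool-1 and a load balancer named my-lb, both placeholder names, and an application that serves a small unauthenticated /healthz page; adjust the path, codes, and intervals for your application.
# Create an HTTP health monitor that probes a lightweight health-check page
openstack loadbalancer healthmonitor create \
  --name my-http-monitor \
  --type HTTP \
  --url-path /healthz \
  --http-method GET \
  --expected-codes 200 \
  --delay 10 --timeout 5 --max-retries 4 \
  lb-pool-1
# Poll the provisioning and operating status of the load balancer
openstack loadbalancer show my-lb -c provisioning_status -c operating_status
# Check the operating status of the individual pool members
openstack loadbalancer member list lb-pool-1 -c name -c operating_status
# Display the full status tree of the load balancer and its child objects
openstack loadbalancer status show my-lb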
[ "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer healthmonitor create --name my-health-monitor --delay 10 --max-retries 4 --timeout 5 --type TCP lb-pool-1", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer healthmonitor set my_health_monitor --delay 600", "openstack loadbalancer healthmonitor show my_health_monitor", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer healthmonitor delete my-health-monitor" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/monitor-lb-service_rhoso-lbaas
10.5. Dynamically Changing a Host Physical Machine or a Network Bridge that is Attached to a Virtual NIC
10.5. Dynamically Changing a Host Physical Machine or a Network Bridge that is Attached to a Virtual NIC This section demonstrates how to move the vNIC of a guest virtual machine from one bridge to another while the guest virtual machine is running, without compromising the guest virtual machine. Prepare the guest virtual machine with a configuration similar to the following: Prepare an XML file for the interface update: Start the guest virtual machine, confirm the guest virtual machine's network functionality, and check that the guest virtual machine's vnetX is connected to the bridge you indicated. Update the guest virtual machine's network with the new interface parameters with the following command: On the guest virtual machine, run service network restart . The guest virtual machine gets a new IP address for virbr1. Check that the guest virtual machine's vnet0 is connected to the new bridge (virbr1).
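Optionally, you can also confirm which bridge each guest interface is attached to directly from virsh, before and after the update. This is a small optional sketch that assumes the guest is named test1, as in the commands listed for this section; virsh domiflist prints each interface with its type, source bridge, model, and MAC address:

# virsh domiflist test1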
[ "<interface type='bridge'> <mac address='52:54:00:4a:c9:5e'/> <source bridge='virbr0'/> <model type='virtio'/> </interface>", "cat br1.xml", "<interface type='bridge'> <mac address='52:54:00:4a:c9:5e'/> <source bridge='virbr1'/> <model type='virtio'/> </interface>", "brctl show bridge name bridge id STP enabled interfaces virbr0 8000.5254007da9f2 yes virbr0-nic vnet0 virbr1 8000.525400682996 yes virbr1-nic", "virsh update-device test1 br1.xml Device updated successfully", "brctl show bridge name bridge id STP enabled interfaces virbr0 8000.5254007da9f2 yes virbr0-nic virbr1 8000.525400682996 yes virbr1-nic vnet0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-dynamic-vnic
Chapter 2. IPAddress [ipam.cluster.x-k8s.io/v1beta1]
Chapter 2. IPAddress [ipam.cluster.x-k8s.io/v1beta1] Description IPAddress is the Schema for the ipaddress API. Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IPAddressSpec is the desired state of an IPAddress. 2.1.1. .spec Description IPAddressSpec is the desired state of an IPAddress. Type object Required address claimRef poolRef prefix Property Type Description address string Address is the IP address. claimRef object ClaimRef is a reference to the claim this IPAddress was created for. gateway string Gateway is the network gateway of the network the address is from. poolRef object PoolRef is a reference to the pool that this IPAddress was created from. prefix integer Prefix is the prefix of the address. 2.1.2. .spec.claimRef Description ClaimRef is a reference to the claim this IPAddress was created for. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.3. .spec.poolRef Description PoolRef is a reference to the pool that this IPAddress was created from. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.2. API endpoints The following API endpoints are available: /apis/ipam.cluster.x-k8s.io/v1beta1/ipaddresses GET : list objects of kind IPAddress /apis/ipam.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/ipaddresses DELETE : delete collection of IPAddress GET : list objects of kind IPAddress POST : create an IPAddress /apis/ipam.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/ipaddresses/{name} DELETE : delete an IPAddress GET : read the specified IPAddress PATCH : partially update the specified IPAddress PUT : replace the specified IPAddress 2.2.1. /apis/ipam.cluster.x-k8s.io/v1beta1/ipaddresses HTTP method GET Description list objects of kind IPAddress Table 2.1. HTTP responses HTTP code Response body 200 - OK IPAddressList schema 401 - Unauthorized Empty 2.2.2. /apis/ipam.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/ipaddresses HTTP method DELETE Description delete collection of IPAddress Table 2.2. 
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IPAddress Table 2.3. HTTP responses HTTP code Response body 200 - OK IPAddressList schema 401 - Unauthorized Empty HTTP method POST Description create an IPAddress Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body IPAddress schema Table 2.6. HTTP responses HTTP code Response body 200 - OK IPAddress schema 201 - Created IPAddress schema 202 - Accepted IPAddress schema 401 - Unauthorized Empty 2.2.3. /apis/ipam.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/ipaddresses/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the IPAddress HTTP method DELETE Description delete an IPAddress Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IPAddress Table 2.10. HTTP responses HTTP code Response body 200 - OK IPAddress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IPAddress Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Response body 200 - OK IPAddress schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IPAddress Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body IPAddress schema Table 2.15. HTTP responses HTTP code Response body 200 - OK IPAddress schema 201 - Created IPAddress schema 401 - Unauthorized Empty
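The schema above maps to a manifest like the following. This is a minimal, hypothetical sketch only: in practice IPAddress objects are normally created by an IPAM provider in response to an IPAddressClaim, and the namespace, resource names, address values, and the InClusterIPPool pool kind shown here are assumptions that must match whatever IPAM provider is installed in your cluster:

$ oc apply -f - <<EOF
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddress
metadata:
  name: example-address
  namespace: example-namespace
spec:
  address: 192.0.2.10
  prefix: 24
  gateway: 192.0.2.1
  claimRef:
    name: example-claim
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: example-pool
EOF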
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cluster_apis/ipaddress-ipam-cluster-x-k8s-io-v1beta1
Part IV. Networking
Part IV. Networking This part describes how to configure the network on Red Hat Enterprise Linux.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/part-networking
Chapter 3. Deploying collectd and rsyslog
Chapter 3. Deploying collectd and rsyslog Deploy collectd and rsyslog on the hosts to collect logs and metrics. Note You do not need to repeat this procedure for new hosts. The Manager configures the hosts automatically. Procedure Log in to the Manager machine using SSH. Copy /etc/ovirt-engine-metrics/config.yml.example to create /etc/ovirt-engine-metrics/config.yml.d/config.yml : Edit the ovirt_env_name and elasticsearch_host parameters in config.yml and save the file. These parameters are mandatory and are documented in the file. Note If you add a Manager or an Elasticsearch installation, copy the Manager's public key to your Metrics Store virtual machine using the following commands: Deploy collectd and rsyslog on the hosts:
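After the playbook finishes, you can spot-check one of the hosts to confirm that both services are running. This is an optional verification sketch, not part of the documented procedure; it assumes you can log in to the host as root:

# systemctl is-active collectd rsyslog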
[ "cp /etc/ovirt-engine-metrics/config.yml.example /etc/ovirt-engine-metrics/config.yml.d/config.yml", "mytemp=USD(mktemp -d) cp /etc/pki/ovirt-engine/keys/engine_id_rsa USDmytemp ssh-keygen -y -f USDmytemp/engine_id_rsa > USDmytemp/engine_id_rsa.pub ssh-copy-id -i USDmytemp/engine_id_rsa.pub root@{elasticsearch_host} rm -rf USDmytemp", "/usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/Deploying_collectd_and_rsyslog
Chapter 9. Managing pointer records (PTRs)
Chapter 9. Managing pointer records (PTRs) A step in configuring the Red Hat OpenStack Platform (RHOSP) DNS service (designate) is to set up IP address-to-domain-name lookups, also referred to as reverse lookups. The DNS resource, pointer (PTR) records, contain the address-to-name mapping data and are stored in reverse lookup zones. The DNS service also enables you to manage reverse lookups for floating IP addresses. The topics included in this section are: Section 9.1, "PTR record basics" Section 9.2, "Creating reverse lookup zones" Section 9.3, "Creating a PTR record" Section 9.4, "Creating multiple PTR records" Section 9.5, "Setting up PTR records for floating IP addresses" Section 9.6, "Unsetting PTR records for floating IP addresses" 9.1. PTR record basics In the Red Hat OpenStack Platform (RHOSP) DNS service (designate) you use pointer (PTR) records to create a number-to-name mapping (reverse mapping) from a single IP or set of IP addresses to a fully qualified domain name (FQDN). Because the Domain Name System (DNS) looks up addresses as names, you create a PTR record that contains a name for the IP address. You form this name by following a particular convention: reverse the IP address and append a special string: in-addr.arpa for IPv4 addresses, and ip6.arpa for IPv6 addresses. For example, if the IP address for my-server.example.com is 198.51.100.42 , then you name the corresponding node in the reverse lookup zone, 42.100.51.198.in-addr.arpa . Listing the name of the IP address backwards facilitates its lookup, because like standard fully qualified domain names (FQDNs), a reversed IP address gets less specific as you move from its left side to its right side. The DNS service writes the contents of the PTR record to a special zone called a reverse lookup zone, whose sole purpose is to provide address-to-name lookups. Because the PTR record contains data that is structured similarly to standard FQDNs, you can delegate child zones of the reverse lookup zone in the same way as you delegate other zones. In the earlier example, the host, 198.51.100.42 , is a node in the 198.in-addr.arpa zone, and this zone can be delegated to the administrators of the 198.0.0.0/8 network. The DNS service manages PTR records for floating IP addresses differently than for standard IP addresses, because of the requirement that the user's RHOSP project owns the zone that contains the IP address. In most use cases involving reverse name lookups, this requirement is easily met. When managing reverse lookups for standard IP addresses, you use the openstack recordset command as you do when managing the other DNS resource record types. However, when working with floating IP addresses, it is common for multiple projects to share a pool of floating IP addresses. To solve the project ownership issue of a shared pool of addresses, you must use a different command when managing reverse lookups for floating IPs, the openstack ptr record command. Additional resources Section 9.3, "Creating a PTR record" Section 9.5, "Setting up PTR records for floating IP addresses" 9.2. Creating reverse lookup zones To properly configure the Red Hat OpenStack Platform (RHOSP) DNS service (designate) you must have a reverse lookup zone. A reverse lookup zone contains PTR records that are required for you to perform address-to-name lookups. You must name reverse lookup zones following this convention: <backward_IP_address>.in-addr.arpa for IPv4 addresses, and <backward_IP_address>.ip6.arpa for IPv6 addresses. 
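If you want to double-check the reverse name that corresponds to a particular address before you create the zone or record, you can let the dig utility construct it for you. This optional sketch uses the documentation address from the example above; the +noall +question options limit the output to the generated question, which shows the in-addr.arpa name:

$ dig -x 198.51.100.42 +noall +question
;42.100.51.198.in-addr.arpa.	IN	PTR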
Typically, you align the zones in your RHOSP deployment to your subnet plan. For example, if you have a /24 subnet for your external network, you create a /24 subnet reverse lookup zone to contain your PTR records. Procedure Source your credentials file. Example Create a reverse lookup zone by using the openstack zone create command and specifying these required arguments: --email <email_address> a valid email address for the person responsible (owner) for the zone. <name> a name for the reverse lookup zone that conforms to the convention: <backward_IP_address>.in-addr.arpa for IPv4 addresses, and <backward_IP_address>.ip6.arpa for IPv6 addresses. Example In this example, the reverse lookup zone is designed for one PTR record, for the 198.51.100.42 address: Sample output Example In another example for a reverse zone that is for a 198.51.100.0/24 subnet, you would create the zone: Sample output Verification Confirm that the reverse lookup zone that you created exists: Sample output For the address-to-name mapping to be complete, the forward zone (the zone that contains the IP address) must exist. If the forward zone does not exist, create it now. Additional resources Creating a zone zone create in the Command Line Interface Reference 9.3. Creating a PTR record In the Red Hat OpenStack Platform (RHOSP) DNS service (designate) you create PTR records to enable reverse lookups (address-to-name mappings). Enabling reverse lookups is a part of properly configuring the DNS service on your RHOSP deployment. Prerequisites Your RHOSP project must own the zone in which you create the PTR record. A reverse lookup zone to store the PTR record. For more information, see Section 9.2, "Creating reverse lookup zones" . Procedure Source your credentials file. Example Create a PTR record by using the openstack recordset create command and specifying these required arguments: --record <domain_name> the target, the domain name, that should be returned when a reverse lookup is performed. --type PTR the kind of record, PTR , that you are creating. <zone_name> the name of the zone, the reverse lookup zone, where the record resides. <record_name> the name of the PTR record. The record name must match the <zone_name> or be a member of the zone. For example, for the reverse lookup zone 100.51.198.in-addr.arpa. , these are valid PTR record names: 1.100.51.198.in-addr.arpa. , 2.100.51.198.in-addr.arpa. , and any other reversed IP addresses in the 198.51.100.0/24 subnet. Example Sample output Verification Perform a reverse lookup to confirm that the IP address ( 198.51.100.42 ) is mapped to the domain name ( www.example.com ). Example In this example, 203.0.113.5 is one of the DNS servers in the deployment: Sample output Additional resources recordset create in the Command Line Interface Reference dig command man page. 9.4. Creating multiple PTR records In the Red Hat OpenStack Platform (RHOSP) DNS service (designate) you can add many PTR records to a larger subnet by using a more broadly defined reverse lookup zone. Prerequisites Your RHOSP project must own the zone in which you create the PTR record. A reverse lookup zone to store the PTR record that is more broadly defined. For example, a 198.51.100.0/24 reverse lookup zone, 100.51.198.in-addr.arpa . For more information, see Section 9.2, "Creating reverse lookup zones" . Procedure Source your credentials file. 
Example Create the PTR record by using the openstack recordset create command and specifying these required arguments: --record <domain_name> the domain name of the lookup. --type PTR the kind of record, PTR , that you are creating. <zone_name> the name of the reverse lookup zone where the record resides. <record_name> the name of the PTR record. The record name must match the <zone_name> or be a member of the zone. For example, for the reverse lookup zone 100.51.198.in-addr.arpa. , these are valid PTR record names: 1.100.51.198.in-addr.arpa. , 2.100.51.198.in-addr.arpa. , and any other reversed IP addresses in the 198.51.100.0/24 subnet. Example In this example, the reverse lookup zone is more broadly defined; for example, a 198.51.100.0/24 reverse lookup zone, 100.51.198.in-addr.arpa : Sample output Verification Perform a reverse lookup to confirm that the IP address ( 198.51.100.3 ) is mapped to the domain name ( cats.example.com ). Example In this example, 203.0.113.5 is one of the DNS servers in the deployment: Sample output Perform a reverse lookup to confirm that any other IP address in the subnet ( 198.51.100.0/24 ) is mapped to the domain name ( example.com ). Example In this example, 203.0.113.5 is one of the DNS servers in the deployment: Sample output A scripted variation that creates several PTR records in one pass is sketched at the end of this chapter. Additional resources recordset create in the Command Line Interface Reference dig command man page. 9.5. Setting up PTR records for floating IP addresses In the Red Hat OpenStack Platform (RHOSP) DNS service (designate) you can create PTR records for floating IP addresses to allow reverse lookups. Prerequisites One or more floating IPs defined. A reverse lookup zone for the floating IP for which you want to create a PTR record. Procedure Source your credentials file. Example Determine the ID of the floating IP address for which you want to create a PTR record. You need this information in a later step. Sample output Determine the RHOSP region name of the neutron instance that hosts the floating IP. You need this information in a later step. Sample output Create the PTR record by using the openstack ptr record set command and specifying these required arguments: <floating_IP_ID> the floating IP ID in the format: <region_name>:<floating_IP_ID>. <ptrd_name> the target, the domain name, that should be returned when a reverse lookup is performed. Example Sample output Verification Perform a reverse lookup to confirm that the floating IP address ( 192.0.2.11 ) is mapped to the domain name ( ftp.example.com ). Example In this example, 203.0.113.5 is one of the DNS servers in the deployment: Sample output Additional resources ptr record set in the Command Line Interface Reference dig command man page. 9.6. Unsetting PTR records for floating IP addresses In the Red Hat OpenStack Platform (RHOSP) DNS service (designate) you can remove PTR records associated with floating IP addresses. Prerequisites A PTR record for a floating IP. Procedure Source your credentials file. Example Determine the ID of the floating IP address for which you want to delete a PTR record. You need this information in a later step. Sample output Determine the name of your RHOSP region. You need this information in a later step. Sample output Delete the PTR record by using the openstack ptr record unset command and specifying these required arguments: <floating_IP_ID> the floating IP ID in the format: <region>:<floating_IP_ID>. Example Verification Confirm that you removed the PTR record. Additional resources ptr record unset in the Command Line Interface Reference
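When you need PTR records for several addresses at once, as in Section 9.4, "Creating multiple PTR records" , you can wrap the openstack recordset create command in a small shell loop. The following is a rough, hypothetical sketch rather than part of the documented procedure: the host names (host3.example.com. and so on) and the chosen last octets are placeholders, and every target name must keep its trailing dot:

$ for last_octet in 3 4 5; do
    openstack recordset create \
      --record host${last_octet}.example.com. \
      --type PTR \
      100.51.198.in-addr.arpa. \
      ${last_octet}.100.51.198.in-addr.arpa.
  done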
[ "source ~/overcloudrc", "openstack zone create --email [email protected] 42.100.51.198.in-addr.arpa.", "+----------------+------------------------------------------+ | Field | Value | +----------------+------------------------------------------+ | action | CREATE | | attributes | | | created_at | 2022-02-02T17:32:47.000000 | | description | None | | email | [email protected] | | id | f5546034-b27e-4326-bf9d-c53ed879f7fa | | masters | | | name | 42.100.51.198.in-addr.arpa. | | pool_id | 794ccc2c-d751-44fe-b57f-8894c9f5c842 | | project_id | 123d51544df443e790b8e95cce52c285 | | serial | 1591119166 | | status | PENDING | | transferred_at | None | | ttl | 3600 | | type | PRIMARY | | updated_at | None | | version | 1 | +----------------+------------------------------------------+", "openstack zone create --email [email protected] 100.51.198.in-addr.arpa.", "+----------------+------------------------------------------+ | Field | Value | +----------------+------------------------------------------+ | action | CREATE | | attributes | | | created_at | 2022-02-02T17:40:23.000000 | | description | None | | email | [email protected] | | id | 5669caad86a04256994cdf755df4d3c1 | | masters | | | name | 100.51.198.in-addr.arpa. | | pool_id | 794ccc2c-d751-44fe-b57f-8894c9f5c842 | | project_id | 123d51544df443e790b8e95cce52c285 | | serial | 1739276248 | | status | PENDING | | transferred_at | None | | ttl | 3600 | | type | PRIMARY | | updated_at | None | | version | 1 | +----------------+------------------------------------------+", "openstack zone list -c id -c name -c status", "+--------------------------------------+-----------------------------+--------+ | id | name | status | +--------------------------------------+-----------------------------+--------+ | f5546034-b27e-4326-bf9d-c53ed879f7fa | 42.100.51.198.in-addr.arpa. | ACTIVE | +--------------------------------------+-----------------------------+--------+", "source ~/overcloudrc", "openstack recordset create --record www.example.com. --type PTR 42.100.51.198.in-addr.arpa. 42.100.51.198.in-addr.arpa.", "+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | action | CREATE | | created_at | 2022-02-02T19:55:50.000000 | | description | None | | id | ca604f72-83e6-421f-bf1c-bb4dc1df994a | | name | 42.100.51.198.in-addr.arpa. | | project_id | 123d51544df443e790b8e95cce52c285 | | records | www.example.com. | | status | PENDING | | ttl | 3600 | | type | PTR | | updated_at | None | | version | 1 | | zone_id | f5546034-b27e-4326-bf9d-c53ed879f7fa | | zone_name | 42.100.51.198.in-addr.arpa. | +-------------+--------------------------------------+", "dig @203.0.113.5 -x 198.51.100.42 +short", "www.example.com.", "source ~/overcloudrc", "openstack recordset create --record cats.example.com. --type PTR --ttl 3600 100.51.198.in-addr.arpa. 3.100.51.198.in-addr.arpa.", "+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | action | CREATE | | created_at | 2022-02-02T20:10:54.000000 | | description | None | | id | c843729b-7aaf-4f99-a40a-d9bf70edf271 | | name | 3.100.51.198.in-addr.arpa. | | project_id | 123d51544df443e790b8e95cce52c285 | | records | cats.example.com. | | status | PENDING | | ttl | 3600 | | type | PTR | | updated_at | None | | version | 1 | | zone_id | e9fd0ced-1d3e-43fa-b9aa-6d4b7a73988d | | zone_name | 100.51.198.in-addr.arpa. 
| +-------------+--------------------------------------+", "dig @203.0.113.5 -x 198.51.100.3 +short", "cats.example.com.", "dig @203.0.113.5 -x 198.51.100.10 +short", "example.com.", "source ~/overcloudrc", "openstack floating ip list -c ID -c \"Floating IP Address\"", "+--------------------------------------+---------------------+ | ID | Floating IP Address | +--------------------------------------+---------------------+ | 5c02c519-4928-4a38-bd10-c748c200912f | 192.0.2.11 | | 89532684-13e1-4af3-bd79-f434c9920cc3 | 192.0.2.12 | | ea3ebc6d-a146-47cd-aaa8-35f06e1e8c3d | 192.0.2.13 | +--------------------------------------+---------------------+", "openstack endpoint list -c ID -c Region -c \"Service Name\"", "+----------------------------------+-----------+--------------+ | ID | Region | Service Name | +----------------------------------+-----------+--------------+ | 16526452effd467a915155ceccf79dae | RegionOne | placement | | 21bf826a62a14456a61bd8f39648e849 | RegionOne | keystone | | 9cb1956999c54001a39d11ea14e037a1 | RegionOne | nova | | bdeec4e2665d4605bb89e16a8b1bc50d | RegionOne | glance | | ced05a1c03ab44caa1a351ace95429e6 | RegionOne | neutron | | e79e3113ea544d039b3a6378e60bdf3f | RegionOne | nova | | f91ee44123954b6c82162dcd2d4fc965 | RegionOne | designate | +----------------------------------+-----------+--------------+", "openstack ptr record set RegionOne:5c02c519-4928-4a38-bd10-c748c200912f ftp.example.com.", "+-------------+------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------+ | action | CREATE | | address | 192.0.2.11 | | description | None | | id | RegionOne:5c02c519-4928-4a38-bd10-c748c200912f | | ptrdname | ftp.example.com. | | status | PENDING | | ttl | 3600 | +-------------+------------------------------------------------+", "dig @203.0.113.5 -x 192.0.2.11 +short", "ftp.example.com.", "source ~/overcloudrc", "openstack floating ip list -c ID -c \"Floating IP Address\"", "+--------------------------------------+---------------------+ | ID | Floating IP Address | +--------------------------------------+---------------------+ | 5c02c519-4928-4a38-bd10-c748c200912f | 192.0.2.11 | | 89532684-13e1-4af3-bd79-f434c9920cc3 | 192.0.2.12 | | ea3ebc6d-a146-47cd-aaa8-35f06e1e8c3d | 192.0.2.13 | +--------------------------------------+---------------------+", "openstack endpoint list -c ID -c Region -c \"Service Name\"", "+----------------------------------+-----------+--------------+ | ID | Region | Service Name | +----------------------------------+-----------+--------------+ | 16526452effd467a915155ceccf79dae | RegionOne | placement | | 21bf826a62a14456a61bd8f39648e849 | RegionOne | keystone | | 9cb1956999c54001a39d11ea14e037a1 | RegionOne | nova | | bdeec4e2665d4605bb89e16a8b1bc50d | RegionOne | glance | | ced05a1c03ab44caa1a351ace95429e6 | RegionOne | neutron | | e79e3113ea544d039b3a6378e60bdf3f | RegionOne | nova | | f91ee44123954b6c82162dcd2d4fc965 | RegionOne | designate | +----------------------------------+-----------+--------------+", "openstack ptr record unset RegionOne:5c02c519-4928-4a38-bd10-c748c200912f", "openstack ptr record list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/manage-pointer-records_rhosp-dnsaas
Introduction
Introduction This book describes the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. The content of this document is specific to the LVM2 release. 1. Audience This book is intended to be used by system administrators managing systems running the Linux operating system. It requires familiarity with Red Hat Enterprise Linux and GFS file system administration.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/ch_introduction-clvm
Chapter 1. About disconnected installation mirroring
Chapter 1. About disconnected installation mirroring You can use a mirror registry to ensure that your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 1.1. Creating a mirror registry If you already have a container image registry, such as Red Hat Quay, you can use it as your mirror registry. If you do not already have a registry, you can create a mirror registry using the mirror registry for Red Hat OpenShift . 1.2. Mirroring images for a disconnected installation You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/disconnected_installation_mirroring/installing-mirroring-disconnected-about
Chapter 31. Installing an Identity Management server using an Ansible playbook
Chapter 31. Installing an Identity Management server using an Ansible playbook Learn more about how to configure a system as an IdM server by using Ansible . Configuring a system as an IdM server establishes an IdM domain and enables the system to offer IdM services to IdM clients. You can manage the deployment by using the ipaserver Ansible role. Prerequisites You understand the general Ansible and IdM concepts. 31.1. Ansible and its advantages for installing IdM Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate installation tasks such as the setup of an IdM server, replica, client, or an entire IdM topology. Advantages of using Ansible to install IdM The following list presents advantages of installing Identity Management using Ansible in contrast to manual installation. You do not need to log into the managed node. You do not need to configure settings on each host to be deployed individually. Instead, you can have one inventory file to deploy a complete cluster. You can reuse an inventory file later for management tasks, for example to add users and hosts. You can reuse an inventory file even for such tasks as are not related to IdM. Additional resources Automating Red Hat Enterprise Linux Identity Management installation Planning Identity Management Preparing the system for IdM server installation 31.2. Installing the ansible-freeipa package Follow this procedure to install the ansible-freeipa package that provides Ansible roles and modules for installing and managing Identity Management (IdM) . Prerequisites Ensure that the controller is a Red Hat Enterprise Linux system with a valid subscription. If this is not the case, see the official Ansible documentation Installation guide for alternative installation instructions. Ensure that you can reach the managed node over the SSH protocol from the controller. Check that the managed node is listed in the /root/.ssh/known_hosts file of the controller. Procedure Use the following procedure on the Ansible controller. If your system is running on RHEL 8.5 and earlier, enable the required repository: If your system is running on RHEL 8.5 and earlier, install the ansible package: Install the ansible-freeipa package: The roles and modules are installed into the /usr/share/ansible/roles/ and /usr/share/ansible/plugins/modules directories. 31.3. Ansible roles location in the file system By default, the ansible-freeipa roles are installed to the /usr/share/ansible/roles/ directory. The structure of the ansible-freeipa package is as follows: The /usr/share/ansible/roles/ directory stores the ipaserver , ipareplica , and ipaclient roles on the Ansible controller. Each role directory stores examples, a basic overview, the license and documentation about the role in a README.md Markdown file. The /usr/share/doc/ansible-freeipa/ directory stores the documentation about individual roles and the topology in README.md Markdown files. It also stores the playbooks/ subdirectory. The /usr/share/doc/ansible-freeipa/playbooks/ directory stores the example playbooks: 31.4. Setting the parameters for a deployment with an integrated DNS and an integrated CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an integrated CA as the root CA in an environment that uses the IdM integrated DNS solution. Note The inventory in this procedure uses the INI format. 
You can, alternatively, use the YAML or JSON formats. Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Specify that you want to use integrated DNS by adding the following option: Specify the DNS forwarding settings. Choose one of the following options: Use the ipaserver_auto_forwarders=true option if you want the installer to use forwarders from the /etc/resolv.conf file. Do not use this option if the nameserver specified in the /etc/resolv.conf file is the localhost 127.0.0.1 address or if you are on a virtual private network and the DNS servers you are using are normally unreachable from the public internet. Use the ipaserver_forwarders option to specify your forwarders manually. The installation process adds the forwarder IP addresses to the /etc/named.conf file on the installed IdM server. Use the ipaserver_no_forwarders=true option to configure root DNS servers to be used instead. Note With no DNS forwarders, your environment is isolated, and names from other DNS domains in your infrastructure are not resolved. Specify the DNS reverse record and zone settings. Choose from the following options: Use the ipaserver_allow_zone_overlap=true option to allow the creation of a (reverse) zone even if the zone is already resolvable. Use the ipaserver_reverse_zones option to specify your reverse zones manually. Use the ipaserver_no_reverse=true option if you do not want the installer to create a reverse DNS zone. Note Using IdM to manage reverse zones is optional. You can use an external DNS service for this purpose instead. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Example playbook to set up an IdM server using admin and Directory Manager passwords stored in an Ansible Vault file Example playbook to set up an IdM server using admin and Directory Manager passwords from an inventory file Additional resources man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 31.5. Setting the parameters for a deployment with external DNS and an integrated CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an integrated CA as the root CA in an environment that uses an external DNS solution. Note The inventory file in this procedure uses the INI format. You can, alternatively, use the YAML or JSON formats. Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. 
Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Make sure that the ipaserver_setup_dns option is set to no or that it is absent. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Example playbook to set up an IdM server using admin and Directory Manager passwords stored in an Ansible Vault file Example playbook to set up an IdM server using admin and Directory Manager passwords from an inventory file Additional resources man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 31.6. Deploying an IdM server with an integrated CA as the root CA using an Ansible playbook Complete this procedure to deploy an IdM server with an integrated certificate authority (CA) as the root CA using an Ansible playbook. Prerequisites The managed node is a Red Hat Enterprise Linux 8 system with a static IP address and a working package manager. You have set the parameters that correspond to your scenario by choosing one of the following procedures: Procedure with integrated DNS Procedure with external DNS Procedure Run the Ansible playbook: Choose one of the following options: If your IdM deployment uses external DNS: add the DNS resource records contained in the /tmp/ipa.system.records.UFRPto.db file to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. If your IdM deployment uses integrated DNS: Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after an IdM DNS server is installed. Add an _ntp._udp service (SRV) record for your time server to your IdM DNS. The presence of the SRV record for the time server of the newly-installed IdM server in IdM DNS ensures that future replica and client installations are automatically configured to synchronize with the time server used by this primary IdM server. 31.7. Setting the parameters for a deployment with an integrated DNS and an external CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an external CA as the root CA in an environment that uses the IdM integrated DNS solution. Note The inventory file in this procedure uses the INI format. You can, alternatively, use the YAML or JSON formats. 
Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Specify that you want to use integrated DNS by adding the following option: Specify the DNS forwarding settings. Choose one of the following options: Use the ipaserver_auto_forwarders=true option if you want the installation process to use forwarders from the /etc/resolv.conf file. This option is not recommended if the nameserver specified in the /etc/resolv.conf file is the localhost 127.0.0.1 address or if you are on a virtual private network and the DNS servers you are using are normally unreachable from the public internet. Use the ipaserver_forwarders option to specify your forwarders manually. The installation process adds the forwarder IP addresses to the /etc/named.conf file on the installed IdM server. Use the ipaserver_no_forwarders=true option to configure root DNS servers to be used instead. Note With no DNS forwarders, your environment is isolated, and names from other DNS domains in your infrastructure are not resolved. Specify the DNS reverse record and zone settings. Choose from the following options: Use the ipaserver_allow_zone_overlap=true option to allow the creation of a (reverse) zone even if the zone is already resolvable. Use the ipaserver_reverse_zones option to specify your reverse zones manually. Use the ipaserver_no_reverse=true option if you do not want the installation process to create a reverse DNS zone. Note Using IdM to manage reverse zones is optional. You can use an external DNS service for this purpose instead. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM adds its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Create a playbook for the first step of the installation. Enter instructions for generating the certificate signing request (CSR) and copying it from the controller to the managed node. Create another playbook for the final step of the installation. Additional resources man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 31.8. Setting the parameters for a deployment with external DNS and an external CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an external CA as the root CA in an environment that uses an external DNS solution. Note The inventory file in this procedure uses the INI format. You can, alternatively, use the YAML or JSON formats. Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. 
Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Make sure that the ipaserver_setup_dns option is set to no or that it is absent. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Create a playbook for the first step of the installation. Enter instructions for generating the certificate signing request (CSR) and copying it from the controller to the managed node. Create another playbook for the final step of the installation. Additional resources Installing an IdM server: Without integrated DNS, with an external CA as the root CA man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 31.9. Deploying an IdM server with an external CA as the root CA using an Ansible playbook Complete this procedure to deploy an IdM server with an external certificate authority (CA) as the root CA using an Ansible playbook. Prerequisites The managed node is a Red Hat Enterprise Linux 8 system with a static IP address and a working package manager. You have set the parameters that correspond to your scenario by choosing one of the following procedures: Procedure with integrated DNS Procedure with external DNS Procedure Run the Ansible playbook with the instructions for the first step of the installation, for example install-server-step1.yml : Locate the ipa.csr certificate signing request file on the controller and submit it to the external CA. Place the IdM CA certificate signed by the external CA in the controller file system so that the playbook in the next step can find it. Run the Ansible playbook with the instructions for the final step of the installation, for example install-server-step2.yml : Choose one of the following options: If your IdM deployment uses external DNS: add the DNS resource records contained in the /tmp/ipa.system.records.UFRPto.db file to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. If your IdM deployment uses integrated DNS: Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after an IdM DNS server is installed. Add an _ntp._udp service (SRV) record for your time server to your IdM DNS. 
The presence of the SRV record for the time server of the newly-installed IdM server in IdM DNS ensures that future replica and client installations are automatically configured to synchronize with the time server used by this primary IdM server. 31.10. Uninstalling an IdM server using an Ansible playbook Note In an existing Identity Management (IdM) deployment, replica and server are interchangeable terms. Complete this procedure to uninstall an IdM replica using an Ansible playbook. In this example: IdM configuration is uninstalled from server123.idm.example.com . server123.idm.example.com and the associated host entry are removed from the IdM topology. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. In this example, the FQDN is server123.idm.example.com . You have stored your ipaadmin_password in the secret.yml Ansible vault. For the ipaserver_remove_from_topology option to work, the system must be running on RHEL 8.9 or later. On the managed node: The system is running on RHEL 8. Procedure Create your Ansible playbook file uninstall-server.yml with the following content: The ipaserver_remove_from_domain option unenrolls the host from the IdM topology. Note If the removal of server123.idm.example.com should lead to a disconnected topology, the removal will be aborted. For more information, see Using an Ansible playbook to uninstall an IdM server even if this leads to a disconnected topology . Uninstall the replica: Ensure that all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. For more information on how to delete DNS records from IdM, see Deleting DNS records in the IdM CLI . 31.11. Using an Ansible playbook to uninstall an IdM server even if this leads to a disconnected topology Note In an existing Identity Management (IdM) deployment, replica and server are interchangeable terms. Complete this procedure to uninstall an IdM replica using an Ansible playbook even if this results in a disconnected IdM topology. In the example, server456.idm.example.com is used to remove the replica and the associated host entry with the FQDN of server123.idm.example.com from the topology, leaving certain replicas disconnected from server456.idm.example.com and the rest of the topology. Note If removing a replica from the topology using only the remove_server_from_domain option does not result in a disconnected topology, no other options are required. If the result is a disconnected topology, you must specify which part of the domain you want to preserve. In that case, you must do the following: Specify the ipaserver_remove_on_server value. Set ipaserver_ignore_topology_disconnect to True. Prerequisites On the control node: You are using Ansible version 2.13 or later. The system is running on RHEL 8.9 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. In this example, the FQDN is server123.idm.example.com . You have stored your ipaadmin_password in the secret.yml Ansible vault. On the managed node: The system is running on RHEL 8 or later. 
Procedure Create your Ansible playbook file uninstall-server.yml with the following content: Note Under normal circumstances, if the removal of server123 does not result in a disconnected topology: if the value for ipaserver_remove_on_server is not set, the replica on which server123 is removed is automatically determined using the replication agreements of server123. Uninstall the replica: Ensure that all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. For more information on how to delete DNS records from IdM, see Deleting DNS records in the IdM CLI . 31.12. Additional resources Planning the replica topology Backing up and restoring IdM servers using Ansible playbooks Inventory basics: formats, hosts, and groups
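Before you run any of the install or uninstall playbooks described in this chapter, it can be worth confirming that the controller can reach the managed node listed in your inventory and that the playbook parses cleanly. This is an optional sketch, not part of the documented procedures; it assumes the ~/MyPlaybooks/inventory file, the [ipaserver] host group, and the install-server.yml playbook from the earlier sections:

$ ansible -i ~/MyPlaybooks/inventory ipaserver -m ping
$ ansible-playbook --syntax-check -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server.yml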
[ "subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms", "yum install ansible", "yum install ansible-freeipa", "ls -1 /usr/share/ansible/roles/ ipaclient ipareplica ipaserver", "ls -1 /usr/share/doc/ansible-freeipa/ playbooks README-client.md README.md README-replica.md README-server.md README-topology.md", "ls -1 /usr/share/doc/ansible-freeipa/playbooks/ install-client.yml install-cluster.yml install-replica.yml install-server.yml uninstall-client.yml uninstall-cluster.yml uninstall-replica.yml uninstall-server.yml", "mkdir MyPlaybooks", "ipaserver_setup_dns=true", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaserver state: present", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true roles: - role: ipaserver state: present", "mkdir MyPlaybooks", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaserver state: present", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true roles: - role: ipaserver state: present", "ansible-playbook -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server.yml", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "mkdir MyPlaybooks", "ipaserver_setup_dns=true", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone [...]", "--- - name: Playbook to configure IPA 
server Step 1 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_ca: true roles: - role: ipaserver state: present post_tasks: - name: Copy CSR /root/ipa.csr from node to \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" fetch: src: /root/ipa.csr dest: \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" flat: true", "--- - name: Playbook to configure IPA server Step 2 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_cert_files: - \"/root/servercert20240601.pem\" - \"/root/cacert.pem\" pre_tasks: - name: Copy \"{{ groups.ipaserver[0] }}-{{ item }}\" to \"/root/{{ item }}\" on node ansible.builtin.copy: src: \"{{ groups.ipaserver[0] }}-{{ item }}\" dest: \"/root/{{ item }}\" force: true with_items: - servercert20240601.pem - cacert.pem roles: - role: ipaserver state: present", "mkdir MyPlaybooks", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone [...]", "--- - name: Playbook to configure IPA server Step 1 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_ca: true roles: - role: ipaserver state: present post_tasks: - name: Copy CSR /root/ipa.csr from node to \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" fetch: src: /root/ipa.csr dest: \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" flat: true", "--- - name: Playbook to configure IPA server Step 2 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_cert_files: - \"/root/servercert20240601.pem\" - \"/root/cacert.pem\" pre_tasks: - name: Copy \"{{ groups.ipaserver[0] }}-{{ item }}\" to \"/root/{{ item }}\" on node ansible.builtin.copy: src: \"{{ groups.ipaserver[0] }}-{{ item }}\" dest: \"/root/{{ item }}\" force: true with_items: - servercert20240601.pem - cacert.pem roles: - role: ipaserver state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server-step1.yml", "ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server-step2.yml", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "--- - name: Playbook to uninstall an IdM replica hosts: ipaserver become: true roles: - role: ipaserver ipaserver_remove_from_domain: true state: absent", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/uninstall-server.yml", "--- - name: Playbook to uninstall an IdM replica hosts: ipaserver become: true roles: - role: ipaserver ipaserver_remove_from_domain: true ipaserver_remove_on_server: server456.idm.example.com ipaserver_ignore_topology_disconnect: true state: absent", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/uninstall-server.yml" ]
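For reference, the NS record cleanup mentioned at the end of the uninstall procedure can look like the following sketch. The commands are illustrative only: the zone name idm.example.com and the use of @ for the zone apex are assumptions, and the exact records to delete depend on your DNS layout.
kinit admin
# remove the NS record that points to the uninstalled replica (assumed record location)
ipa dnsrecord-del idm.example.com @ --ns-rec="server123.idm.example.com."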
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/installing-an-Identity-Management-server-using-an-Ansible-playbook_installing-identity-management
A.16. Disable SMART Disk Monitoring for Guest Virtual Machines
A.16. Disable SMART Disk Monitoring for Guest Virtual Machines SMART disk monitoring can be safely disabled as virtual disks and the physical storage devices are managed by the host physical machine.
[ "service smartd stop", "systemctl disable smartd" ]
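To confirm the change on a systemd-based guest, a quick check such as the following can be used; this is a sketch and not part of the original procedure:
systemctl is-active smartd    # should report inactive after the service is stopped
systemctl is-enabled smartd   # should report disabled after the service is removed from startup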
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-disable_smart_disk_monitoring_for_guest_virtual_machines
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/replacing_nodes/providing-feedback-on-red-hat-documentation_rhodf
Appendix B. Provisioning FIPS-compliant hosts
Appendix B. Provisioning FIPS-compliant hosts Satellite supports provisioning hosts that comply with the National Institute of Standards and Technology's Security Requirements for Cryptographic Modules standard, reference number FIPS 140-2, referred to here as FIPS. To enable the provisioning of hosts that are FIPS-compliant, complete the following tasks: Change the provisioning password hashing algorithm for the operating system Create a host group and set a host group parameter to enable FIPS For more information, see Creating a Host Group in Managing hosts . The provisioned hosts have the FIPS-compliant settings applied. To confirm that these settings are enabled, complete the steps in Section B.3, "Verifying FIPS mode is enabled" . B.1. Changing the provisioning password hashing algorithm To provision FIPS-compliant hosts, you must first set the password hashing algorithm that you use in provisioning to SHA256. This configuration setting must be applied for each operating system you want to deploy as FIPS-compliant. Procedure Identify the Operating System IDs: Update each operating system's password hash value. Note that you cannot use a comma-separated list of values. B.2. Setting the FIPS-enabled parameter To provision a FIPS-compliant host, you must create a host group and set the host group parameter fips_enabled to true . If this is not set to true , or is absent, the FIPS-specific changes do not apply to the system. You can set this parameter when you provision a host or for a host group. To set this parameter when provisioning a host, append --parameters fips_enabled=true to the Hammer command. For more information, see the output of the command hammer hostgroup set-parameter --help . B.3. Verifying FIPS mode is enabled To verify these FIPS compliance changes have been successful, you must provision a host and check its configuration. Procedure Log in to the host as root or with an admin-level account. Enter the following command: A value of 1 confirms that FIPS mode is enabled.
[ "hammer os list", "hammer os update --password-hash SHA256 --title \" My_Operating_System \"", "hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name fips_enabled --value \"true\"", "cat /proc/sys/crypto/fips_enabled" ]
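As mentioned in Section B.2, the fips_enabled parameter can also be passed when provisioning an individual host. The following is a sketch only: the host name and host group are example values, and other options that your environment requires (organization, location, provisioning interfaces, and so on) are omitted here.
hammer host create \
  --name "fips-host01.example.com" \
  --hostgroup "My_Host_Group" \
  --parameters fips_enabled=true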
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/provisioning_fips_compliant_hosts_provisioning
Chapter 12. DeploymentRequest [apps.openshift.io/v1]
Chapter 12. DeploymentRequest [apps.openshift.io/v1] Description DeploymentRequest is a request to a deployment config for a new deployment. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required name latest force 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources excludeTriggers array (string) ExcludeTriggers instructs the instantiator to avoid processing the specified triggers. This field overrides the triggers from latest and allows clients to control specific logic. This field is ignored if not specified. force boolean Force will try to force a new deployment to run. If the deployment config is paused, then setting this to true will return an Invalid error. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds latest boolean Latest will update the deployment config with the latest state from all triggers. name string Name of the deployment config for requesting a new deployment. 12.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/instantiate POST : create instantiate of a DeploymentConfig 12.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/instantiate Table 12.1. Global path parameters Parameter Type Description name string name of the DeploymentRequest namespace string object name and auth scope, such as for teams and projects Table 12.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create instantiate of a DeploymentConfig Table 12.3. Body parameters Parameter Type Description body DeploymentRequest schema Table 12.4. HTTP responses HTTP code Response body 200 - OK DeploymentRequest schema 201 - Created DeploymentRequest schema 202 - Accepted DeploymentRequest schema 401 - Unauthorized Empty
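As a usage sketch, a new deployment can be requested by sending a DeploymentRequest to the instantiate endpoint listed above. The cluster API URL, the demo namespace, the frontend deployment config name, and the TOKEN variable are example values, not part of the API reference.
# POST a DeploymentRequest for the deployment config "frontend" in namespace "demo"
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeploymentRequest","apiVersion":"apps.openshift.io/v1","name":"frontend","latest":true,"force":false}' \
  https://api.example.com:6443/apis/apps.openshift.io/v1/namespaces/demo/deploymentconfigs/frontend/instantiate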
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/deploymentrequest-apps-openshift-io-v1
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/making-open-source-more-inclusive
5.3.5. Use Kerberos Authentication
5.3.5. Use Kerberos Authentication One of the most glaring flaws inherent when NIS is used for authentication is that whenever a user logs into a machine, a password hash from the /etc/shadow map is sent over the network. If an intruder gains access to an NIS domain and sniffs network traffic, usernames and password hashes can be quietly collected. With enough time, a password cracking program can guess weak passwords, and an attacker can gain access to a valid account on the network. Since Kerberos uses secret-key cryptography, no password hashes are ever sent over the network, making the system far more secure. For more about Kerberos, refer to the chapter titled Kerberos in the Reference Guide .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-server-nis-kerb
4.70. gdm
4.70. gdm 4.70.1. RHBA-2012:1447 - gdm bug fix update Updated gdm packages that fix a bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The GNOME Display Manager (GDM) is a highly configurable reimplementation of XDM, the X Display Manager. GDM allows you to log into your system with the X Window System running and supports running several different X sessions on your local machine at the same time. Bug Fix BZ# 860645 When gdm was used to connect to a server via XDMCP (X Display Manager Control Protocol), another connection to a remote system using the "ssh -X" command resulted in wrong authorization with the X server. Consequently, applications such as xterm could not be displayed on the remote system. This update provides a compatible MIT-MAGIC-COOKIE-1 key in the described scenario, thus fixing this bug. All users of gdm are advised to upgrade to these updated packages, which fix this bug. 4.70.2. RHBA-2011:1721 - gdm bug fix update Updated gdm packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The GNOME Display Manager (GDM) provides the graphical login screen, shown shortly after boot up, log out, and when user-switching. Bug Fixes BZ# 661618 GDM did not properly queue up multiple authentication messages so that messages could quickly be overwritten by newer messages. The queueing mechanism has been modified, and this problem no longer occurs. BZ# 628462 If a Russian keyboard layout was chosen during system installation, the login screen was configured to use Russian input for user names and passwords by default. However, GDM did not provide any visible way to switch between keyboard layouts, and pressing Left Shift and Right Shift keys did not cause the input to change to ASCII mode in GDM. Consequently, users were not able to log in to the system. With this update, GDM allows users to switch keyboard layout properly using the keyboard layout indicator, and users can now log in as expected. BZ# 723515 GDM did not properly release file descriptors used with XDMCP indirect queries. As a consequence, the number of file descriptors used by GDM increased with every XDMCP chooser restart, which, in some cases, led to memory exhaustion and a GDM crash. The underlying GDM code has been modified to manage file descriptors properly, and the problem no longer occurs in this scenario. BZ# 670619 In multi-monitor setups, GDM always displayed the login window on the screen that was determined as active by the mouse pointer position. This behavior caused unpredictable login window placement in dual screen setups when using the NVIDIA's TwinView Dual-Display Architecture because the mouse pointer initially appeared exactly between the monitors outside of the visible screen. GDM now uses new logic to ensure that the initial placement of the mouse pointer and the login window are consistently on one screen. BZ# 645453 The GDM simple greeter login window displayed "Suspend", "Restart" and "Shut Down" buttons even though the buttons were disabled in GDM configuration and the PolicyKit toolkit disallowed any stop, restart, suspend actions on the system. With this update, GDM logic responsible for setting up the greeter login window has been modified and these buttons are no longer displayed under these circumstances BZ# 622561 When authenticating to a system and the fingerprint authentication method was enabled, but no fingerprint reader was attached to the machine, GDM erroneously displayed authentication method buttons for a brief moment. 
With this update, GDM displays authentication method buttons only if the authentication method is enabled and a reading device is connected. BZ# 708430 GDM did not properly handle its message queue. Therefore, when resetting a password on user login, GDM displayed an error message from an unsuccessful attempt. The queueing mechanism has been modified, and this problem no longer occurs. BZ# 688158 When logging into a system using LDAP authentication, GDM did not properly handle LDAP usernames containing backslash characters. As a consequence, such usernames were not recognized and users were not able to log in even though they provided valid credentials. With this update, GDM now handles usernames with backslash characters correctly and users can log in as expected. All users of gdm are advised to upgrade to these updated packages, which fix these bugs. 4.70.3. RHEA-2012:0435 - gdm enhancement update Updated gdm packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The GNOME Display Manager (GDM) provides the graphical login screen, shown shortly after boot up, logout, and when user-switching. Enhancement BZ# 799940 Previously, X server audit messages were not included by default in the X server log. Now, those messages are unconditionally included in the log. Also, with this update, verbose messages are added to the X server log if debugging is enabled in the /etc/gdm/custom.conf file (by setting "Enable=true" in the [debug] section). All users of gdm are advised to upgrade to these updated packages, which add this enhancement.
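For reference, the debug setting described in BZ# 799940 can be added as in the following sketch, assuming /etc/gdm/custom.conf does not already contain a [debug] section; back up the file before editing it.
# append a [debug] section to the GDM configuration file
cat >> /etc/gdm/custom.conf <<'EOF'
[debug]
Enable=true
EOF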
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gdm
7.3. Understanding the pkispawn Utility
7.3. Understanding the pkispawn Utility In Red Hat Certificate System, you set up the individual public key infrastructure (PKI) subsystems using the pkispawn utility. During the setup, pkispawn : Reads the default values from the /usr/share/pki/server/etc/default.cfg file. For further details, see the pki_default.cfg (5) man page. Important Do not edit the /usr/share/pki/server/etc/default.cfg file. Instead, create a configuration file that overrides the defaults, and pass it to the pkispawn utility. For details about using a configuration file, see Section 7.7, "Two-step Installation" . Uses the passwords and other deployment-specific information provided depending on the setup mode: Interactive mode: The user is asked for the individual settings during the setup. Use this mode for simple deployments. Batch mode: The values are read from a configuration file the user provides. Parameters not set in the configuration file use the defaults. Performs the installation of the requested PKI subsystem. Passes the settings to a Java servlet that performs the configuration based on the settings. Use the pkispawn utility to install: A root CA. For details, see Section 7.4, "Setting Up a Root Certificate Authority" . A subordinate CA or any other subsystem. For details, see Section 7.6, "Setting up Additional Subsystems" . Note See Section 7.4, "Setting Up a Root Certificate Authority" on how to set up a root CA using the pkispawn utility. To set up a subordinate CA or non-CA subsystems, see Section 7.8, "Setting up Subsystems with an External CA" . For further information about pkispawn and examples, see the pkispawn (8) man page.
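The two setup modes can be invoked as in the following sketch. The subsystem name (CA) and the override file /root/ca.cfg are example values; pass overrides only in your own file and never edit the default.cfg file itself.
pkispawn -s CA                  # interactive mode: prompts for the deployment-specific settings
pkispawn -s CA -f /root/ca.cfg  # batch mode: reads overrides from the user-provided configuration file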
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/understanding_the_pkispawn_utility
Chapter 2. Upgrading a Red Hat Ceph Storage cluster running Red Hat Enterprise Linux 8 from RHCS 4 to RHCS 5
Chapter 2. Upgrading a Red Hat Ceph Storage cluster running Red Hat Enterprise Linux 8 from RHCS 4 to RHCS 5 As a storage administrator, you can upgrade a Red Hat Ceph Storage cluster running Red Hat Enterprise Linux 8 from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5. The upgrade process includes the following tasks: Use Ansible playbooks to upgrade a Red Hat Ceph Storage 4 storage cluster to Red Hat Ceph Storage 5. Important ceph-ansible is currently not supported with Red Hat Ceph Storage 5. This means that once you have migrated your storage cluster to Red Hat Ceph Storage 5, you must use cephadm and cephadm-ansible to perform subsequent updates. Important While upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, do not set the bluestore_fsck_quick_fix_on_mount parameter to true and do not run the ceph-bluestore-tool --path PATH_TO_OSD --command quick-fix|repair commands, as doing so might lead to improperly formatted OMAP keys and cause data corruption. Warning Upgrading to Red Hat Ceph Storage 5.2 from Red Hat Ceph Storage 5.0 on Ceph Object Gateway storage clusters (single-site or multi-site) is supported, but you must set the ceph config set mgr mgr/cephadm/no_five_one_rgw true --force option prior to upgrading your storage cluster. Upgrading to Red Hat Ceph Storage 5.2 from Red Hat Ceph Storage 5.1 on Ceph Object Gateway storage clusters (single-site or multi-site) is not supported due to a known issue. For more information, see the knowledge base article Support Restrictions for upgrades for RADOS Gateway (RGW) on Red Hat Ceph Storage 5.2 . Note Follow the knowledge base article How to upgrade from Red Hat Ceph Storage 4.2z4 to 5.0z4 with the upgrade procedure if you are planning to upgrade to Red Hat Ceph Storage 5.0z4. Important The option bluefs_buffered_io is set to True by default for Red Hat Ceph Storage. This option enables BlueFS to perform buffered reads in some cases, and enables the kernel page cache to act as a secondary cache for reads like RocksDB block reads. For example, if the RocksDB block cache is not large enough to hold all blocks during the OMAP iteration, it may be possible to read them from the page cache instead of the disk. This can dramatically improve performance when osd_memory_target is too small to hold all entries in the block cache. Currently, enabling bluefs_buffered_io and disabling the system level swap prevents performance degradation. For more information about viewing the current setting for bluefs_buffered_io , see the Viewing the bluefs_buffered_io setting section in the Red Hat Ceph Storage Administration Guide . Red Hat Ceph Storage 5 supports only containerized daemons. It does not support non-containerized storage clusters. If you are upgrading a non-containerized storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade process includes the conversion to a containerized deployment. 2.1. Prerequisites A Red Hat Ceph Storage 4 cluster running Red Hat Enterprise Linux 8.4 or later. A valid customer subscription. Root-level access to the Ansible administration node. Root-level access to all nodes in the storage cluster. The Ansible user account for use with the Ansible application. Red Hat Ceph Storage tools and Ansible repositories are enabled. Important You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster and the Red Hat Enterprise Linux operating system to a new major release at the same time.
The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure the ftype is set to 1 . If the value of ftype is not 1 , attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers . Starting with Red Hat Enterprise Linux 8, mkfs.xfs enables ftype=1 by default. 2.2. Compatibility considerations between RHCS and podman versions podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions. If you plan to upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 as part of the Ceph upgrade process, make sure that the version of podman is compatible with Red Hat Ceph Storage 5. Red Hat recommends to use the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage 5. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance. Important Red Hat Ceph Storage 5 is compatible with podman versions 2.0.0 and later, except for version 2.2.1. Version 2.2.1 is not compatible with Red Hat Ceph Storage 5. The following table shows version compatibility between Red Hat Ceph Storage 5 and versions of podman . Ceph Podman 1.9 2.0 2.1 2.2 3.0 5.0 (Pacific) false true true false true 2.3. Preparing for an upgrade As a storage administrator, you can upgrade your Ceph storage cluster to Red Hat Ceph Storage 5. However, some components of your storage cluster must be running specific software versions before an upgrade can take place. The following list shows the minimum software versions that must be installed on your storage cluster before you can upgrade to Red Hat Ceph Storage 5. Red Hat Ceph Storage 4.3 or later. Ansible 2.9. Ceph-ansible shipped with the latest version of Red Hat Ceph Storage. Red Hat Enterprise Linux 8.4 EUS or later. FileStore OSDs must be migrated to BlueStore. For more information about converting OSDs from FileStore to BlueStore, refer to BlueStore . There is no direct upgrade path from Red Hat Ceph Storage versions earlier than Red Hat Ceph Storage 4.3. If you are upgrading from Red Hat Ceph Storage 3, you must first upgrade to Red Hat Ceph Storage 4.3 or later, and then upgrade to Red Hat Ceph Storage 5. Important You can only upgrade to the latest version of Red Hat Ceph Storage 5. For example, if version 5.1 is available, you cannot upgrade from 4 to 5.0; you must go directly to 5.1. Important The new deployment of Red Hat Ceph Storage-4.3.z1 on Red Hat Enterprise Linux-8.7 (or higher) or Upgrade of Red Hat Ceph Storage-4.3.z1 to 5.X with host OS as Red Hat Enterprise Linux-8.7(or higher) fails at TASK [ceph-mgr : wait for all mgr to be up] . The behavior of podman released with Red Hat Enterprise Linux 8.7 had changed with respect to SELinux relabeling. Due to this, depending on their startup order, some Ceph containers would fail to start as they would not have access to the files they needed. As a workaround, refer to the knowledge base RHCS 4.3 installation fails while executing the command `ceph mgr dump` . To upgrade your storage cluster to Red Hat Ceph Storage 5, Red Hat recommends that your cluster be running Red Hat Ceph Storage 4.3 or later. Refer to the Knowledgebase article What are the Red Hat Ceph Storage Releases? . 
This article contains download links to the most recent versions of the Ceph packages and ceph-ansible. The upgrade process uses Ansible playbooks to upgrade an Red Hat Ceph Storage 4 storage cluster to Red Hat Ceph Storage 5. If your Red Hat Ceph Storage 4 cluster is a non-containerized cluster, the upgrade process includes a step to transform the cluster into a containerized version. Red Hat Ceph Storage 5 does not run on non-containerized clusters. If you have a mirroring or multisite configuration, upgrade one cluster at a time. Make sure that each upgraded cluster is running properly before upgrading another cluster. Important leapp does not support upgrades for encrypted OSDs or OSDs that have encrypted partitions. If your OSDs are encrypted and you are upgrading the host OS, disable dmcrypt in ceph-ansible before upgrading the OS. For more information about using leapp , refer to Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 . Important Perform the first three steps in this procedure only if the storage cluster is not already running the latest version of Red Hat Ceph Storage 4. The latest version of Red Hat Ceph Storage 4 should be 4.3 or later. Prerequisites A running Red Hat Ceph Storage 4 cluster. Sudo-level access to all nodes in the storage cluster. A valid customer subscription. Root-level access to the Ansible administration node. The Ansible user account for use with the Ansible application. Red Hat Ceph Storage tools and Ansible repositories are enabled. Procedure Enable the Ceph and Ansible repositories on the Ansible administration node: Example Update Ansible: Example If the storage cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to create a denylist for clients: Syntax If the storage cluster was originally installed using Cockpit, create a symbolic link in the /usr/share/ceph-ansible directory to the inventory file where Cockpit created it, at /usr/share/ansible-runner-service/inventory/hosts : Change to the /usr/share/ceph-ansible directory: Create the symbolic link: To upgrade the cluster using ceph-ansible , create the symbolic link in the etc/ansible/hosts directory to the hosts inventory file: If the storage cluster was originally installed using Cockpit, copy the Cockpit-generated SSH keys to the Ansible user's ~/.ssh directory: Copy the keys: Syntax Replace ANSIBLE_USERNAME with the user name for Ansible. The usual default user name is admin . Example Set the appropriate owner, group, and permissions on the key files: Syntax Replace ANSIBLE_USERNAME with the username for Ansible. The usual default user name is admin . Example Additional Resources What are the Red Hat Ceph Storage Releases? For more information about converting from FileStore to BlueStore, refer to BlueStore . 2.4. Backing up the files before the host OS upgrade Note Perform the procedure in this section only if you are upgrading the host OS. If you are not upgrading the host OS, skip this section. Before you can perform the upgrade procedure, you must make backup copies of the files that you customized for your storage cluster, including keyring files and the yml files for your configuration as the ceph.conf file gets overridden when you execute any playbook. Prerequisites A running Red Hat Ceph Storage 4 cluster. A valid customer subscription. Root-level access to the Ansible administration node. The Ansible user account for use with the Ansible application. 
Red Hat Ceph Storage Tools and Ansible repositories are enabled. Procedure Make a backup copy of the /etc/ceph and /var/lib/ceph folders. Make a backup copy of the ceph.client.admin.keyring file. Make backup copies of the ceph.conf files from each node. Make backup copies of the /etc/ganesha/ folder on each node. If the storage cluster has RBD mirroring defined, then make backup copies of the /etc/ceph folder and the group_vars/rbdmirrors.yml file. 2.5. Converting to a containerized deployment This procedure is required for non-containerized clusters. If your storage cluster is a non-containerized cluster, this procedure transforms the cluster into a containerized version. Red Hat Ceph Storage 5 supports container-based deployments only. A cluster needs to be containerized before upgrading to RHCS 5.x. If your Red Hat Ceph Storage 4 storage cluster is already containerized, skip this section. Important This procedure stops and restarts a daemon. If the playbook stops executing during this procedure, be sure to analyze the state of the cluster before restarting. Prerequisites A running Red Hat Ceph Storage non-containerized 4 cluster. Root-level access to all nodes in the storage cluster. A valid customer subscription. Root-level access to the Ansible administration node. The Ansible user account for use with the Ansible application. Procedure If you are running a multisite setup, set rgw_multisite: false in all.yml . Ensure the group_vars/all.yml has the following default values for the configuration parameters: Note These values differ if you use a local registry and a custom image name. Optional: For two-way RBD mirroring configured using the command-line interface in a bare-metal storage cluster, the cluster does not migrate RBD mirroring. For such a configuration, follow the below steps before migrating the non-containerized storage cluster to a containerized deployment: Create a user on the Ceph client node: Syntax Example Change the username in the auth file in /etc/ceph directory: Example Import the auth file to add relevant permissions: Syntax Example Check the service name of the RBD mirror node: Example Add the rbd-mirror node to the /etc/ansible/hosts file: Example If you are using daemons that are not containerized, convert them to containerized format: Syntax The -vvvv option collects verbose logs of the conversion process. Example Once the playbook completes successfully, edit the value of rgw_multisite: true in the all.yml file and ensure the value of containerized_deployment is true . Note Ensure to remove the ceph-iscsi , libtcmu , and tcmu-runner packages from the admin node. 2.6. The upgrade process As a storage administrator, you use Ansible playbooks to upgrade an Red Hat Ceph Storage 4 storage cluster to Red Hat Ceph Storage 5. The rolling_update.yml Ansible playbook performs upgrades for deployments of Red Hat Ceph Storage. The ceph-ansible upgrades the Ceph nodes in the following order: Ceph Monitor Ceph Manager Ceph OSD nodes MDS nodes Ceph Object Gateway (RGW) nodes Ceph RBD-mirror node Ceph NFS nodes Ceph iSCSI gateway node Ceph client nodes Ceph-crash daemons Node-exporter on all nodes Ceph Dashboard Important After the storage cluster is upgraded from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the Grafana UI shows two dashboards. This is because the port for Prometheus in Red Hat Ceph Storage 4 is 9092 while for Red Hat Ceph Storage 5 is 9095. You can remove the grafana. 
The cephadm redeploys the service and the daemons and removes the old dashboard on the Grafana UI. Note Red Hat Ceph Storage 5 supports only containerized deployments. ceph-ansible is currently not supported with Red Hat Ceph Storage 5. This means that once you have migrated your storage cluster to Red Hat Ceph Storage 5, you must use cephadm to perform subsequent updates. Important To deploy multi-site Ceph Object Gateway with single realm and multiple realms, edit the all.yml file. For more information, see the Configuring multi-site Ceph Object Gateways in the Red Hat Ceph Storage 4 Installation Guide. Note Red Hat Ceph Storage 5 also includes a health check function that returns a DAEMON_OLD_VERSION warning if it detects that any of the daemons in the storage cluster are running multiple versions of Red Hat Ceph Storage. The warning is triggered when the daemons continue to run multiple versions of Red Hat Ceph Storage beyond the time value set in the mon_warn_older_version_delay option. By default, the mon_warn_older_version_delay option is set to one week. This setting allows most upgrades to proceed without falsely seeing the warning. If the upgrade process is paused for an extended time period, you can mute the health warning: After the upgrade has finished, unmute the health warning: Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all hosts in the storage cluster. A valid customer subscription. Root-level access to the Ansible administration node. The latest versions of Ansible and ceph-ansible available with Red Hat Ceph Storage 5. The ansible user account for use with the Ansible application. The nodes of the storage cluster is upgraded to Red Hat Enterprise Linux 8.4 EUS or later. Important The Ansible inventory file must be present in the ceph-ansible directory. Procedure Enable the Ceph and Ansible repositories on the Ansible administration node: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 On the Ansible administration node, ensure that the latest versions of the ansible and ceph-ansible packages are installed. Syntax Navigate to the /usr/share/ceph-ansible/ directory: Example If upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, make copies of the group_vars/osds.yml.sample and group_vars/clients.yml.sample files, and rename them to group_vars/osds.yml , and group_vars/clients.yml respectively. Example If upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, edit the group_vars/all.yml file to add Red Hat Ceph Storage 5 details. Once you have done the above two steps, copy the settings from the old yaml files to the new yaml files. Do not change the values of ceph_rhcs_version , ceph_docker_image , and grafana_container_image as the values for these configuration parameters are for Red Hat Ceph Storage 5. This ensures that all the settings related to your cluster are present in the current yaml file. Example Note Ensure the Red Hat Ceph Storage 5 container images are set to the default values. Edit the group_vars/osds.yml file. Add and set the following options: Syntax Open the group_vars/all.yml file and verify the following values are present from the old all.yml file. The fetch_directory option is set with the same value from the old all.yml file: Syntax Replace FULL_DIRECTORY_PATH with a writable location, such as the Ansible user's home directory. 
If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface option: Replace INTERFACE with the interface to which the Ceph Object Gateway nodes listen. If your current setup has SSL certificates configured, edit the following: Syntax Uncomment the upgrade_ceph_packages option and set it to True : Syntax If the storage cluster has more than one Ceph Object Gateway instance per node, then uncomment the radosgw_num_instances setting and set it to the number of instances per node in the cluster: Syntax Example If the storage cluster has Ceph Object Gateway multi-site defined, check the multisite settings in all.yml to make sure that they contain the same values as they did in the old all.yml file. If the buckets are created or have the num_shards = 0 , manually reshard the buckets, before planning an upgrade to Red Hat Ceph Storage 5.3: Warning Upgrade to Red Hat Ceph Storage 5.3 from older releases when bucket_index_max_shards is 0 can result in the loss of the Ceph Object Gateway bucket's metadata leading to the bucket's unavailability while trying to access it. Hence, ensure bucket_index_max_shards is set to 11 shards. If not, modify this configuration at the zonegroup level. Syntax Example Log in as ansible-user on the Ansible administration node. Use the --extra-vars option to update the infrastructure-playbooks/rolling_update.yml playbook and to change the health_osd_check_retries and health_osd_check_delay values to 50 and 30 , respectively: Example For each OSD node, these values cause ceph-ansible to check the storage cluster health every 30 seconds, up to 50 times. This means that ceph-ansible waits up to 25 minutes for each OSD. Adjust the health_osd_check_retries option value up or down, based on the used storage capacity of the storage cluster. For example, if you are using 218 TB out of 436 TB, or 50% of the storage capacity, then set the health_osd_check_retries option to 50 . /etc/ansible/hosts is the default location for the Ansible inventory file. Run the rolling_update.yml playbook to convert the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5: Syntax The -vvvv option collects verbose logs of the upgrade process. Example Important Using the --limit Ansible option with the rolling_update.yml playbook is not supported. Review the Ansible playbook log output to verify the status of the upgrade. Verification List all running containers: Example Check the health status of the cluster. Replace MONITOR_ID with the name of the Ceph Monitor container found in the step: Syntax Example Verify the Ceph cluster daemon versions to confirm the upgrade of all daemons. Replace MONITOR_ID with the name of the Ceph Monitor container found in the step: Syntax Example 2.7. Converting the storage cluster to using cephadm After you have upgraded the storage cluster to Red Hat Ceph Storage 5, run the cephadm-adopt playbook to convert the storage cluster daemons to run cephadm . The cephadm-adopt playbook adopts the Ceph services, installs all cephadm dependencies, enables the cephadm Orchestrator backend, generates and configures the ssh key on all hosts, and adds the hosts to the Orchestrator configuration. Note After you run the cephadm-adopt playbook, remove the ceph-ansible package. The cluster daemons no longer work with ceph-ansible . You must use cephadm to manage the cluster daemons. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. 
Procedure Log in to the ceph-ansible node and change directory to /usr/share/ceph-ansible . Edit the all.yml file. Syntax Example Run the cephadm-adopt playbook: Syntax Example Set the minimum compat client parameter to luminous : Example Run the following command to enable applications to run on the NFS-Ganesha pool. POOL_NAME is nfs-ganesha , and APPLICATION_NAME is the name of the application you want to enable, such as cephfs , rbd , or rgw . Syntax Example Important The cephadm-adopt playbook does not bring up rbd-mirroring after migrating the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5. To work around this issue, add the peers manually: Syntax Example Remove Grafana after upgrade: Log in to the Cephadm shell: Example Fetch the name of Grafana in your storage cluster: Example Remove Grafana: Syntax Example Wait a few minutes and check the latest log: Example cephadm redeploys the Grafana service and the daemon. Additional Resources For more information about using leapp to upgrade Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8, see Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 . For more information about using leapp to upgrade Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, see Upgrading from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 . For more information about converting from FileStore to BlueStore, refer to BlueStore . For more information about storage peers, see Viewing information about peers . 2.8. Installing cephadm-ansible on an upgraded storage cluster cephadm-ansible is a collection of Ansible playbooks to simplify workflows that are not covered by cephadm . After installation, the playbooks are located in /usr/share/cephadm-ansible/ . Note Before adding new nodes or new clients to your upgraded storage cluster, run the cephadm-preflight.yml playbook. Prerequisites Root-level access to the Ansible administration node. A valid Red Hat subscription with the appropriate entitlements. An active Red Hat Network (RHN) or service account to access the Red Hat Registry. Procedure Uninstall ansible and the older ceph-ansible packages: Syntax Disable Ansible repository and enable Ceph repository on the Ansible administration node: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the cephadm-ansible package, which installs the ansible-core as a dependency: Syntax Additional Resources Running the preflight playbook Adding hosts Adding Monitor service Adding Manager service Adding OSDs For more information about configuring clients and services, see Red Hat Ceph Storage Operations Guide . For more information about the cephadm-ansible playbooks, see The cephadm-ansible playbooks .
[ "subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms", "dnf update ansible ceph-ansible", "ceph auth caps client. ID mon 'profile rbd' osd 'profile rbd pool= POOL_NAME_1 , profile rbd pool= POOL_NAME_2 '", "cd /usr/share/ceph-ansible", "ln -s /usr/share/ansible-runner-service/inventory/hosts hosts", "ln -s /etc/ansible/hosts hosts", "cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/ ANSIBLE_USERNAME /.ssh/id_rsa.pub cp /usr/share/ansible-runner-service/env/ssh_key /home/ ANSIBLE_USERNAME /.ssh/id_rsa", "cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/admin/.ssh/id_rsa.pub cp /usr/share/ansible-runner-service/env/ssh_key /home/admin/.ssh/id_rsa", "chown ANSIBLE_USERNAME : ANSIBLE_USERNAME /home/ ANSIBLE_USERNAME /.ssh/id_rsa.pub chown ANSIBLE_USERNAME : ANSIBLE_USERNAME /home/ ANSIBLE_USERNAME /.ssh/id_rsa chmod 644 /home/ ANSIBLE_USERNAME /.ssh/id_rsa.pub chmod 600 /home/ ANSIBLE_USERNAME /.ssh/id_rsa", "chown admin:admin /home/admin/.ssh/id_rsa.pub chown admin:admin /home/admin/.ssh/id_rsa chmod 644 /home/admin/.ssh/id_rsa.pub chmod 600 /home/admin/.ssh/id_rsa", "ceph_docker_image_tag: \"latest\" ceph_docker_registry: \"registry.redhat.io\" ceph_docker_image: rhceph/rhceph-4-rhel8 containerized_deployment: true", "ceph auth get client.PRIMARY_CLUSTER_NAME -o /etc/ceph/ceph.PRIMARY_CLUSTER_NAME.keyring", "ceph auth get client.rbd-mirror.site-a -o /etc/ceph/ceph.client.rbd-mirror.site-a.keyring", "[client.rbd-mirror.rbd-client-site-a] key = AQCbKbVg+E7POBAA7COSZCodvOrg2LWIFc9+3g== caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"", "ceph auth import -i PATH_TO_KEYRING", "ceph auth import -i /etc/ceph/ceph.client.rbd-mirror.rbd-client-site-a.keyring", "systemctl list-units --all systemctl stop [email protected] systemctl disable [email protected] systemctl reset-failed [email protected] systemctl start [email protected] systemctl enable [email protected] systemctl status [email protected]", "[rbdmirrors] ceph.client.rbd-mirror.rbd-client-site-a", "ansible-playbook -vvvv -i INVENTORY_FILE infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml", "[ceph-admin@admin ceph-ansible]USD ansible-playbook -vvvv -i hosts infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml", "ceph health mute DAEMON_OLD_VERSION --sticky", "ceph health unmute DAEMON_OLD_VERSION", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms", "dnf update ansible ceph-ansible", "cd /usr/share/ceph-ansible", "cp group_vars/osds.yml.sample group_vars/osds.yml cp group_vars/mdss.yml.sample group_vars/mdss.yml cp group_vars/rgws.yml.sample group_vars/rgws.yml cp group_vars/clients.yml.sample group_vars/clients.yml", "fetch_directory: ~/ceph-ansible-keys monitor_interface: eth0 public_network: 192.168.0.0/24 ceph_docker_registry_auth: true ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME ceph_docker_registry_password: TOKEN dashboard_admin_user: DASHBOARD_ADMIN_USERNAME dashboard_admin_password: DASHBOARD_ADMIN_PASSWORD grafana_admin_user: GRAFANA_ADMIN_USER grafana_admin_password: GRAFANA_ADMIN_PASSWORD radosgw_interface: eth0 ceph_docker_image: \"rhceph/rhceph-5-rhel8\" ceph_docker_image_tag: \"latest\" ceph_docker_registry: \"registry.redhat.io\" node_exporter_container_image: 
registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 grafana_container_image: registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:5 prometheus_container_image: registry.redhat.io/openshift4/ose-prometheus:v4.6 alertmanager_container_image: registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6", "nb_retry_wait_osd_up: 50 delay_wait_osd_up: 30", "fetch_directory: FULL_DIRECTORY_PATH", "radosgw_interface: INTERFACE", "radosgw_frontend_ssl_certificate: /etc/pki/ca-trust/extracted/ CERTIFICATE_NAME radosgw_frontend_port: 443", "upgrade_ceph_packages: True", "radosgw_num_instances : NUMBER_OF_INSTANCES_PER_NODE", "radosgw_num_instances : 2", "radosgw-admin bucket reshard --num-shards 11 --bucket BUCKET_NAME", "radosgw-admin bucket reshard --num-shards 11 --bucket mybucket", "ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml --extra-vars \"health_osd_check_retries=50 health_osd_check_delay=30\"", "ansible-playbook -vvvv infrastructure-playbooks/rolling_update.yml -i INVENTORY_FILE", "[ceph-admin@admin ceph-ansible]USD ansible-playbook -vvvv infrastructure-playbooks/rolling_update.yml -i hosts", "podman ps", "exec ceph-mon- MONITOR_ID ceph -s", "podman exec ceph-mon-mon01 ceph -s", "exec ceph-mon- MONITOR_ID ceph --cluster ceph versions", "podman exec ceph-mon-mon01 ceph --cluster ceph versions", "ceph_origin: custom/rhcs ceph_custom_repositories: - name: NAME state: present description: DESCRIPTION gpgcheck: 'no' baseurl: BASE_URL file: FILE_NAME priority: '2' enabled: 1", "ceph_origin: custom ceph_custom_repositories: - name: ceph_custom state: present description: Ceph custom repo gpgcheck: 'no' baseurl: https://example.ceph.redhat.com file: cephbuild priority: '2' enabled: 1 - name: ceph_custom_1 state: present description: Ceph custom repo 1 gpgcheck: 'no' baseurl: https://example.ceph.redhat.com file: cephbuild_1 priority: '2' enabled: 1", "ansible-playbook infrastructure-playbooks/cephadm-adopt.yml -i INVENTORY_FILE", "[ceph-admin@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/cephadm-adopt.yml -i hosts", "ceph osd set-require-min-compat-client luminous", "ceph osd pool application enable POOL_NAME APPLICATION_NAME", "ceph osd pool application enable nfs-ganesha rgw", "rbd mirror pool peer add POOL_NAME CLIENT_NAME @ CLUSTER_NAME", "rbd --cluster site-a mirror pool peer add image-pool client.rbd-mirror-peer@site-b", "cephadm shell", "ceph orch ps --daemon_type grafana", "ceph orch daemon rm GRAFANA_DAEMON_NAME", "ceph orch daemon rm grafana.host01 Removed grafana.host01 from host 'host01'", "ceph log last cephadm", "dnf remove ansible ceph-ansible", "subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms --disable=ansible-2.9-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms --disable=ansible-2.9-for-rhel-9-x86_64-rpms", "dnf install cephadm-ansible" ]
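After the cephadm-adopt playbook completes, the adoption can be spot-checked with commands such as the following; this is a sketch and does not replace the verification steps in the upgrade procedure.
cephadm shell -- ceph -s          # overall cluster health
cephadm shell -- ceph orch ps     # daemons now managed by the cephadm Orchestrator
cephadm shell -- ceph versions    # confirm all daemons report the Red Hat Ceph Storage 5 release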
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/upgrade_guide/upgrading-a-red-hat-ceph-storage-cluster-running-rhel-8-from-rhcs-4-to-rhcs-5
Chapter 12. Network interface bonding
Chapter 12. Network interface bonding You can use various bonding options in your custom network configuration. 12.1. Network interface bonding for overcloud nodes You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput. Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds. Table 12.1. Supported interface bonding types Bond type Type value Allowed bridge types Allowed members OVS kernel bonds ovs_bond ovs_bridge interface OVS-DPDK bonds ovs_dpdk_bond ovs_user_bridge ovs_dpdk_port Linux kernel bonds linux_bond ovs_bridge or linux_bridge interface Important Do not combine ovs_bridge and ovs_user_bridge on the same node. 12.2. Creating Open vSwitch (OVS) bonds You create OVS bonds in your network interface templates. For example, you can create a bond as part of an OVS user space bridge: In this example, you create the bond from two DPDK ports. The ovs_options parameter contains the bonding options. You can configure bonding options in a network environment file with the BondInterfaceOvsOptions parameter: 12.3. Open vSwitch (OVS) bonding options You can set various Open vSwitch (OVS) bonding options with the ovs_options heat parameter in your NIC template files. The active-backup, balance-tlb, balance-alb and balance-slb modes do not require any specific configuration of the switch. bond_mode=balance-slb Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver, although unlike mode 2, balance-slb does not require any specific configuration of the switch. You can use the balance-slb mode to provide load balancing even when the switch is not configured to use LACP. bond_mode=active-backup When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing. lacp=[active | passive | off] Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup . other-config:lacp-fallback-ab=true Set active-backup as the bond mode if LACP fails. other_config:lacp-time=[fast | slow] Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow. other_config:bond-detect-mode=[miimon | carrier] Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier. other_config:bond-miimon-interval=100 If using miimon, set the heartbeat interval (milliseconds).
bond_updelay=1000 Set the interval (milliseconds) that a link must be up to be activated to prevent flapping. other_config:bond-rebalance-interval=10000 Set the interval (milliseconds) that flows are rebalancing between bond members. Set this value to zero to disable flow rebalancing between bond members. 12.4. Using Link Aggregation Control Protocol (LACP) with Open vSwitch (OVS) bonding modes You can use bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance. Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options. Important The OVS/OVS-DPDK balance-tcp mode is available as a technology preview only. Important On control and storage networks, Red Hat recommends that you use Linux bonds with VLAN and LACP, because OVS bonds carry the potential for control plane disruption that can occur when OVS or the neutron agent is restarted for updates, hot fixes, and other events. The Linux bond/LACP/VLAN configuration provides NIC management without the OVS disruption potential. Table 12.2. LACP options for OVS kernel and OVS-DPDK bond modes Objective OVS bond mode Compatible LACP options Notes High availability (active-passive) active-backup active , passive , or off Increased throughput (active-active) balance-slb active , passive , or off Performance is affected by extra parsing per packet. There is a potential for vhost-user lock contention. balance-tcp active or passive Tech preview only . Not recommended for use in production. Recirculation needed for L4 hashing has a performance impact. As with balance-slb, performance is affected by extra parsing per packet and there is a potential for vhost-user lock contention. LACP must be enabled. 12.5. Creating Linux bonds You create linux bonds in your network interface templates. For example, you can create a linux bond that bond two interfaces: The bonding_options parameter sets the specific bonding options for the Linux bond. mode Sets the bonding mode, which in the example is 802.3ad or LACP mode. For more information about Linux bonding modes, see "Upstream Switch Configuration Depending on the Bonding Modes" in the Red Hat Enterprise Linux 8 Configuring and Managing Networking guide. lacp_rate Defines whether LACP packets are sent every 1 second, or every 30 seconds. updelay Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages. miimon The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver. Use the following additional examples as guides to configure your own Linux bonds: Linux bond set to active-backup mode with one VLAN: Linux bond on OVS bridge. Bond set to 802.3ad LACP mode with one VLAN:
[ "params: USDnetwork_config: network_config: - type: ovs_user_bridge name: br-ex use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 2140 ovs_options: {get_param: BondInterfaceOvsOptions} rx_queue: get_param: NumDpdkInterfaceRxQueues members: - type: ovs_dpdk_port name: dpdk0 mtu: 2140 members: - type: interface name: p1p1 - type: ovs_dpdk_port name: dpdk1 mtu: 2140 members: - type: interface name: p1p2", "parameter_defaults: BondInterfaceOvsOptions: \"bond_mode=balance-slb\"", "params: USDnetwork_config: network_config: - type: linux_bond name: bond1 members: - type: interface name: nic2 - type: interface name: nic3 bonding_options: \"mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100\"", ". params: USDnetwork_config: network_config: - type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet", "params: USDnetwork_config: network_config: - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: linux_bond name: bond_tenant bonding_options: \"mode=802.3ad updelay=1000 miimon=100\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: p1p1 primary: true - type: interface name: p1p2 - type: vlan device: bond_tenant vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_network-interface-bonding
5.4. Permanent Changes in SELinux States and Modes
5.4. Permanent Changes in SELinux States and Modes As discussed in Section 2.4, "SELinux States and Modes" , SELinux can be enabled or disabled. When enabled, SELinux has two modes: enforcing and permissive. Use the getenforce or sestatus commands to check the status of SELinux. The getenforce command returns Enforcing , Permissive , or Disabled . The sestatus command returns the SELinux status and the SELinux policy being used: Note When the system runs SELinux in permissive mode, users are able to label files incorrectly. Files created with SELinux in permissive mode are not labeled correctly while files created while SELinux is disabled are not labeled at all. This behavior causes problems when changing to enforcing mode because files are labeled incorrectly or are not labeled at all. To prevent incorrectly labeled and unlabeled files from causing problems, file systems are automatically relabeled when changing from the disabled state to permissive or enforcing mode. When changing from permissive mode to enforcing mode, force a relabeling on boot by creating the .autorelabel file in the root directory: 5.4.1. Enabling SELinux When enabled, SELinux can run in one of two modes: enforcing or permissive. The following sections show how to permanently change into these modes. 5.4.1.1. Enforcing Mode When SELinux is running in enforcing mode, it enforces the SELinux policy and denies access based on SELinux policy rules. In Red Hat Enterprise Linux, enforcing mode is enabled by default when the system was initially installed with SELinux. If SELinux was disabled, follow the procedure below to change mode to enforcing again: Procedure 5.2. Changing to Enforcing Mode This procedure assumes that the selinux-policy-targeted , selinux-policy , libselinux , libselinux-python , libselinux-utils , policycoreutils , policycoreutils-python , setroubleshoot , setroubleshoot-server , setroubleshoot-plugins packages are installed. To verify that the packages are installed, use the following command: rpm -q package_name Important If the system was initially installed without SELinux, particularly the selinux-policy package, one additional step is necessary to enable SELinux. To make sure SELinux is initialized during system startup, the dracut utility has to be run to put SELinux awareness into the initramfs file system. Failing to do so causes SELinux to not start during system startup. Before SELinux is enabled, each file on the file system must be labeled with an SELinux context. Before this happens, confined domains may be denied access, preventing your system from booting correctly. To prevent this, configure SELINUX=permissive in /etc/selinux/config : For more information about the permissive mode, see Section 5.4.1.2, "Permissive Mode" . As the Linux root user, reboot the system. During the boot, file systems are labeled. The label process labels each file with an SELinux context: Each * (asterisk) character on the bottom line represents 1000 files that have been labeled. In the above example, four * characters represent 4000 files have been labeled. The time it takes to label all files depends on the number of files on the system and the speed of hard drives. On modern systems, this process can take as short as 10 minutes. In permissive mode, the SELinux policy is not enforced, but denial messages are still logged for actions that would have been denied in enforcing mode. 
Before changing to enforcing mode, as the Linux root user, run the following command to confirm that SELinux did not deny actions during the last boot: If SELinux did not deny any actions during the last boot, this command returns no output. See Chapter 8, Troubleshooting for troubleshooting information if SELinux denied access during boot. If there were no denial messages in /var/log/messages , configure SELINUX=enforcing in /etc/selinux/config : Reboot your system. After reboot, confirm that getenforce returns Enforcing : Temporary changes in modes are covered in Section 2.4, "SELinux States and Modes" . 5.4.1.2. Permissive Mode When SELinux is running in permissive mode, SELinux policy is not enforced. The system remains operational and SELinux does not deny any operations but only logs AVC messages, which can be then used for troubleshooting, debugging, and SELinux policy improvements. To permanently change mode to permissive, follow the procedure below: Procedure 5.3. Changing to Permissive Mode Edit the /etc/selinux/config file as follows: Reboot the system: Temporary changes in modes are covered in Section 2.4, "SELinux States and Modes" .
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /selinux Current mode: enforcing Mode from config file: enforcing Policy version: 24 Policy from config file: targeted", "~]# touch /.autorelabel; reboot", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "*** Warning -- SELinux targeted policy relabel is required. *** Relabeling could take a very long time, depending on file *** system size and speed of hard drives. ****", "~]# grep \"SELinux is preventing\" /var/log/messages", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "~]USD getenforce Enforcing", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "~]# reboot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-Security-Enhanced_Linux-Working_with_SELinux-Changing_SELinux_Modes
13.4. The Hot Rod Interface Connector
13.4. The Hot Rod Interface Connector The following enables a Hot Rod server using the hotrod socket binding. The connector creates a supporting topology cache with default settings. These settings can be tuned by adding the <topology-state-transfer /> child element to the connector as follows: The Hot Rod connector can be tuned with additional settings. See Section 13.4.1, "Configure Hot Rod Connectors" for more information on how to configure the Hot Rod connector. Note The Hot Rod connector can be secured using SSL. See the Hot Rod Authentication Using SASL section of the Developer Guide for more information. Report a bug 13.4.1. Configure Hot Rod Connectors The following procedure describes the attributes used to configure the Hot Rod connector in Red Hat JBoss Data Grid's Remote Client-Server Mode. Both the hotrod-connector and topology-state-transfer elements must be configured based on the following procedure. Procedure 13.1. Configuring Hot Rod Connectors for Remote Client-Server Mode The hotrod-connector element defines the configuration elements for use with Hot Rod. The socket-binding parameter specifies the socket binding port used by the Hot Rod connector. This is a mandatory parameter. The cache-container parameter names the cache container used by the Hot Rod connector. This is a mandatory parameter. The worker-threads parameter specifies the number of worker threads available for the Hot Rod connector. The default value for this parameter is 160 . This is an optional parameter. The idle-timeout parameter specifies the time (in milliseconds) the connector can remain idle before the connection times out. The default value for this parameter is -1 , which means that no timeout period is set. This is an optional parameter. The tcp-nodelay parameter specifies whether TCP packets will be delayed and sent out in batches. Valid values for this parameter are true and false . The default value for this parameter is true . This is an optional parameter. The send-buffer-size parameter indicates the size of the send buffer for the Hot Rod connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter. The receive-buffer-size parameter indicates the size of the receive buffer for the Hot Rod connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter. The topology-state-transfer element specifies the topology state transfer configurations for the Hot Rod connector. This element can only occur once within a hotrod-connector element. The lock-timeout parameter specifies the time (in milliseconds) after which the operation attempting to obtain a lock times out. The default value for this parameter is 10 seconds. This is an optional parameter. The replication-timeout parameter specifies the time (in milliseconds) after which the replication operation times out. The default value for this parameter is 10 seconds. This is an optional parameter. The external-host parameter specifies the hostname sent by the Hot Rod server to clients listed in the topology information. The default value for this parameter is the host address. This is an optional parameter. The external-port parameter specifies the port sent by the Hot Rod server to clients listed in the topology information. The default value for this parameter is the configured port. This is an optional parameter. The lazy-retrieval parameter indicates whether the Hot Rod connector will carry out retrieval operations lazily. 
The default value for this parameter is true . This is an optional parameter. The await-initial-transfer parameter specifies whether the initial state retrieval happens immediately at startup. This parameter only applies when lazy-retrieval is set to false . The default value for this parameter is true .
[ "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\" />", "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\"> <topology-state-transfer lazy-retrieval=\"false\" lock-timeout=\"1000\" replication-timeout=\"5000\" /> </hotrod-connector>", "<subsystem xmlns=\"urn:infinispan:server:endpoint:6.1\"> <hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\" worker-threads=\"USD{VALUE}\" idle-timeout=\"USD{VALUE}\" tcp-nodelay=\"USD{TRUE/FALSE}\" send-buffer-size=\"USD{VALUE}\" receive-buffer-size=\"USD{VALUE}\" /> <topology-state-transfer lock-timeout\"=\"USD{MILLISECONDS}\" replication-timeout=\"USD{MILLISECONDS}\" external-host=\"USD{HOSTNAME}\" external-port=\"USD{PORT}\" lazy-retrieval=\"USD{TRUE/FALSE}\" await-initial-transfer=\"USD{TRUE/FALSE}\" /> </subsystem>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-The_Hot_Rod_Interface_Connector
Chapter 4. Configuring client applications for connecting to a Kafka cluster
Chapter 4. Configuring client applications for connecting to a Kafka cluster To connect to a Kafka cluster, a client application must be configured with a minimum set of properties that identify the brokers and enable a connection. Additionally, you need to add a serializer/deserializer mechanism to convert messages into or out of the byte array format used by Kafka. When developing a client, you begin by adding an initial connection to your Kafka cluster, which is used to discover all available brokers. When you have established a connection, you can begin consuming messages from Kafka topics or producing messages to them. Although not required, a unique client ID is recommended so that you can identify your clients in logs and metrics collection. You can configure the properties in a properties file. Using a properties file means you can modify the configuration without recompiling the code. For example, you can load the properties in a Java client using the following code: Loading configuration properties into a client Properties properties = new Properties(); InputStream inputStream = new FileInputStream("config.properties"); properties.load(inputStream); KafkaProducer<String, String> producer = new KafkaProducer<>(properties); You can also add the properties directly to the code in a configuration object. For example, you can use the setProperty() method for a Java client application. Adding properties directly is a useful option when you only have a small number of properties to configure. 4.1. Basic producer client configuration When you develop a producer client, configure the following: A connection to your Kafka cluster A serializer to transform message keys into bytes for the Kafka broker A serializer to transform message values into bytes for the Kafka broker You might also add a compression type if you want to send and store compressed messages. Basic producer client configuration properties client.id = my-producer-id 1 bootstrap.servers = my-cluster-kafka-bootstrap:9092 2 key.serializer = org.apache.kafka.common.serialization.StringSerializer 3 value.serializer = org.apache.kafka.common.serialization.StringSerializer 4 1 The logical name for the client. 2 Bootstrap address for the client to be able to make an initial connection to the Kafka cluster. 3 Serializer to transform message keys into bytes before being sent to the Kafka broker. 4 Serializer to transform message values into bytes before being sent to the Kafka broker. Adding producer client configuration directly to the code Properties props = new Properties(); props.setProperty(ProducerConfig.CLIENT_ID_CONFIG, "my-producer-id"); props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092"); props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); KafkaProducer<String, String> producer = new KafkaProducer<>(props); The KafkaProducer specifies string key and value types for the messages it sends. The serializers used must be able to convert the keys and values from the specified types into bytes before sending them to Kafka. 4.2. 
Basic consumer client configuration When you develop a consumer client, configure the following: A connection to your Kafka cluster A deserializer to transform the bytes fetched from the Kafka broker into message keys that can be understood by the client application A deserializer to transform the bytes fetched from the Kafka broker into message values that can be understood by the client application Typically, you also add a consumer group ID to associate the consumer with a consumer group. A consumer group is a logical entity for distributing the processing of a large data stream from one or more topics to parallel consumers. Consumers are grouped using a group.id , allowing messages to be spread across the members. In a given consumer group, each topic partition is read by a single consumer. A single consumer can handle many partitions. For maximum parallelism, create one consumer for each partition. If there are more consumers than partitions, some consumers remain idle, ready to take over in case of failure. Basic consumer client configuration properties client.id = my-consumer-id 1 group.id = my-group-id 2 bootstrap.servers = my-cluster-kafka-bootstrap:9092 3 key.deserializer = org.apache.kafka.common.serialization.StringDeserializer 4 value.deserializer = org.apache.kafka.common.serialization.StringDeserializer 5 1 The logical name for the client. 2 A group ID for the consumer to be able to join a specific consumer group. 3 Bootstrap address for the client to be able to make an initial connection to the Kafka cluster. 4 Deserializer to transform the bytes fetched from the Kafka broker into message keys. 5 Deserializer to transform the bytes fetched from the Kafka broker into message values. Adding consumer client configuration directly to the code Properties props = new Properties(); props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, "my-consumer-id"); props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id"); props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092"); props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props); The KafkaConsumer specifies string key and value types for the messages it receives. The deserializers used must be able to convert the bytes received from Kafka into the specified types. Note Each consumer group must have a unique group.id . If you restart a consumer with the same group.id , it resumes consuming messages from where it left off before it was stopped.
[ "Properties properties = new Properties(); InsetPropertyStream insetPropertyStream = new FileInsetPropertyStream(\"config.properties\"); properties.load(insetPropertyStream); KafkaProducer<String, String> consumer = new KafkaProducer<>(properties);", "client.id = my-producer-id 1 bootstrap.servers = my-cluster-kafka-bootstrap:9092 2 key.serializer = org.apache.kafka.common.serialization.StringSerializer 3 value.serializer = org.apache.kafka.common.serialization.StringSerializer 4", "Properties props = new Properties(); props.setProperty(ProducerConfig.CLIENT_ID_CONFIG, \"my-producer-id\"); props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"my-cluster-kafka-bootstrap:9092\"); props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); KafkaProducer<String, String> producer = new KafkaProducer<>(properties);", "client.id = my-consumer-id 1 group.id = my-group-id 2 bootstrap.servers = my-cluster-kafka-bootstrap:9092 3 key.deserializer = org.apache.kafka.common.serialization.StringDeserializer 4 value.deserializer = org.apache.kafka.common.serialization.StringDeserializer 5", "Properties props = new Properties(); props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, \"my-consumer-id\"); props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, \"my-group-id\"); props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \"my-cluster-kafka-bootstrap:9092\"); props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/developing_kafka_client_applications/con-client-dev-config-basics-str
Chapter 5. Deprecated features
Chapter 5. Deprecated features The features deprecated in this release, and that were supported in releases of Streams for Apache Kafka, are outlined below. 5.1. Java 11 deprecated in Streams for Apache Kafka 2.7.0 Support for Java 11 is deprecated in Kafka 3.7.0 and Streams for Apache Kafka 2.7.0. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in the future. Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to 17. Note Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way. 5.2. Kafka MirrorMaker 2 identity replication policy Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster's name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios. To implement an identity replication policy, you must specify a replication policy class ( replication.policy.class ) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in the future. Therefore, it is recommended to update your implementation to use Kafka's own replication policy class ( org.apache.kafka.connect.mirror.IdentityReplicationPolicy ). See Using Streams for Apache Kafka with MirrorMaker 2 . 5.3. Kafka MirrorMaker 1 Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0.0 and will be removed in Kafka 4.0.0. MirrorMaker 2 will be the only version available. MirrorMaker 2 is based on the Kafka Connect framework, connectors managing the transfer of data between clusters. As a result, MirrorMaker 1 has also been deprecated in Streams for Apache Kafka as well. If you are using MirrorMaker 1 (referred to as just MirrorMaker in the Streams for Apache Kafka documentation), use MirrorMaker 2 with the IdentityReplicationPolicy class. MirrorMaker 2 renames topics replicated to a target cluster. IdentityReplicationPolicy configuration overrides the automatic renaming. Use it to produce the same active/passive unidirectional replication as MirrorMaker 1. See Using Streams for Apache Kafka with MirrorMaker 2 . 5.4. Kafka Bridge span attributes The following Kafka Bridge span attributes are deprecated with replacements shown where applicable: http.method replaced by http.request.method http.url replaced by url.scheme , url.path , and url.query messaging.destination replaced by messaging.destination.name http.status_code replaced by http.response.status_code messaging.destination.kind=topic without replacement Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are inline with changes to OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge
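For the MirrorMaker 2 identity replication policy described above, the recommended change amounts to a single line in your MirrorMaker 2 configuration; the exact file depends on your deployment, and the property and class names are the ones given above:
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy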
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_rhel/deprecated-features-str
Chapter 6. Uninstalling a cluster on Azure Stack Hub
Chapter 6. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with user-provisioned infrastructure clusters. There might be resources that the installation program did not create or that the installation program is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. While you can uninstall the cluster using the copy of the installation program that was used to deploy it, using OpenShift Container Platform version 4.13 or later is recommended. The removal of service principals is dependent on the Microsoft Azure AD Graph API. Using version 4.13 or later of the installation program ensures that service principals are removed without the need for manual intervention, if and when Microsoft decides to retire the Azure AD Graph API. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub
Chapter 19. Using the KDC Proxy in IdM
Chapter 19. Using the KDC Proxy in IdM Some administrators might choose to make the default Kerberos ports inaccessible in their deployment. To allow users, hosts, and services to obtain Kerberos credentials, you can use the HTTPS service as a proxy that communicates with Kerberos via the HTTPS port 443. In Identity Management (IdM), the Kerberos Key Distribution Center Proxy (KKDCP) provides this functionality. On an IdM server, KKDCP is enabled by default and available at https:// server.idm.example.com /KdcProxy . On an IdM client, you must change its Kerberos configuration to access the KKDCP. 19.1. Configuring an IdM client to use KKDCP As an Identity Management (IdM) system administrator, you can configure an IdM client to use the Kerberos Key Distribution Center Proxy (KKDCP) on an IdM server. This is useful if the default Kerberos ports are not accessible on the IdM server and the HTTPS port 443 is the only way of accessing the Kerberos service. Prerequisites You have root access to the IdM client. Procedure Open the /etc/krb5.conf file for editing. In the [realms] section, enter the URL of the KKDCP for the kdc , admin_server , and kpasswd_server options: For redundancy, you can add the parameters kdc , admin_server , and kpasswd_server multiple times to indicate different KKDCP servers. Restart the sssd service to make the changes take effect: 19.2. Verifying that KKDCP is enabled on an IdM server On an Identity Management (IdM) server, the Kerberos Key Distribution Center Proxy (KKDCP) is automatically enabled each time the Apache web server starts if the attribute and value pair ipaConfigString=kdcProxyEnabled exists in the directory. In this situation, the symbolic link /etc/httpd/conf.d/ipa-kdc-proxy.conf is created. You can verify if the KKDCP is enabled on the IdM server, even as an unprivileged user. Procedure Check that the symbolic link exists: The output confirms that KKDCP is enabled. 19.3. Disabling KKDCP on an IdM server As an Identity Management (IdM) system administrator, you can disable the Kerberos Key Distribution Center Proxy (KKDCP) on an IdM server. Prerequisites You have root access to the IdM server. Procedure Remove the ipaConfigString=kdcProxyEnabled attribute and value pair from the directory: Restart the httpd service: KKDCP is now disabled on the current IdM server. Verification Verify that the symbolic link does not exist: 19.4. Re-enabling KKDCP on an IdM server On an IdM server, the Kerberos Key Distribution Center Proxy (KKDCP) is enabled by default and available at https:// server.idm.example.com /KdcProxy . If KKDCP has been disabled on a server, you can re-enable it. Prerequisites You have root access to the IdM server. Procedure Add the ipaConfigString=kdcProxyEnabled attribute and value pair to the directory: Restart the httpd service: KKDCP is now enabled on the current IdM server. Verification Verify that the symbolic link exists: 19.5. Configuring the KKDCP server I With the following configuration, you can enable TCP to be used as the transport protocol between the IdM KKDCP and the Active Directory (AD) realm, where multiple Kerberos servers are used. Prerequisites You have root access. Procedure Set the use_dns parameter in the [global] section of the /etc/ipa/kdcproxy/kdcproxy.conf file to false . Put the proxied realm information into the /etc/ipa/kdcproxy/kdcproxy.conf file. For example, for the [AD. 
EXAMPLE.COM ] realm with proxy list the realm configuration parameters as follows: Important The realm configuration parameters must list multiple servers separated by a space, as opposed to /etc/krb5.conf and kdc.conf , in which certain options may be specified multiple times. Restart Identity Management (IdM) services: Additional resources Configure IPA server as a KDC Proxy for AD Kerberos communication (Red Hat Knowledgebase) 19.6. Configuring the KKDCP server II The following server configuration relies on the DNS service records to find Active Directory (AD) servers to communicate with. Prerequisites You have root access. Procedure In the /etc/ipa/kdcproxy/kdcproxy.conf file, the [global] section, set the use_dns parameter to true . The configs parameter allows you to load other configuration modules. In this case, the configuration is read from the MIT libkrb5 library. Optional: In case you do not want to use DNS service records, add explicit AD servers to the [realms] section of the /etc/krb5.conf file. If the realm with proxy is, for example, AD. EXAMPLE.COM , you add: Restart Identity Management (IdM) services: Additional resources Configure IPA server as a KDC Proxy for AD Kerberos communication (Red Hat Knowledgebase)
[ "[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }", "~]# systemctl restart sssd", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf", "ipa-ldap-updater /usr/share/ipa/kdcproxy-disable.uldif Update complete The ipa-ldap-updater command was successful", "systemctl restart httpd.service", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf ls: cannot access '/etc/httpd/conf.d/ipa-kdc-proxy.conf': No such file or directory", "ipa-ldap-updater /usr/share/ipa/kdcproxy-enable.uldif Update complete The ipa-ldap-updater command was successful", "systemctl restart httpd.service", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf", "[global] use_dns = false", "[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464", "ipactl restart", "[global] configs = mit use_dns = true", "[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }", "ipactl restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/using-the-kdc-proxy-in-idm_managing-users-groups-hosts
Chapter 6. Managing Replication Topology
Chapter 6. Managing Replication Topology This chapter describes how to manage replication between servers in an Identity Management (IdM) domain. Note This chapter describes simplified topology management introduced in Red Hat Enterprise Linux 7.3. The procedures require domain level 1 (see Chapter 7, Displaying and Raising the Domain Level ). For documentation on managing topology at domain level 0, see Section D.3, "Managing Replicas and Replication Agreements" . For details on installing an initial replica and basic information on replication, see Chapter 4, Installing and Uninstalling Identity Management Replicas . 6.1. Explaining Replication Agreements, Topology Suffixes, and Topology Segments Replication Agreements Data stored on an IdM server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Note For additional details, see Section 4.1, "Explaining IdM Replicas" . Topology Suffixes Topology suffixes store the data that is replicated. IdM supports two types of topology suffixes: domain and ca . Each suffix represents a separate back end, a separate replication topology. When a replication agreement is configured, it joins two topology suffixes of the same type on two different servers. The domain suffix: dc= example ,dc= com The domain suffix contains all domain-related data. When two replicas have a replication agreement between their domain suffixes, they share directory data, such as users, groups, and policies. The ca suffix: o=ipaca The ca suffix contains data for the Certificate System component. It is only present on servers with a certificate authority (CA) installed. When two replicas have a replication agreement between their ca suffixes, they share certificate data. Figure 6.1. Topology Suffixes An initial topology segment is set up between two servers by the ipa-replica-install script when installing a new replica. Example 6.1. Viewing Topology Suffixes The ipa topologysuffix-find command displays a list of topology suffixes: Topology Segments When two replicas have a replication agreement between their suffixes, the suffixes form a topology segment . Each topology segment consists of a left node and a right node . The nodes represent the servers joined in the replication agreement. Topology segments in IdM are always bidirectional. Each segment represents two replication agreements: from server A to server B, and from server B to server A. The data is therefore replicated in both directions. Figure 6.2. Topology Segments Example 6.2. Viewing Topology Segments The ipa topologysegment-find command shows the current topology segments configured for the domain or CA suffixes. For example, for the domain suffix: In this example, domain-related data is only replicated between two servers: server1.example.com and server2.example.com . To display details for a particular segment only, use the ipa topologysegment-show command:
[ "ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------", "ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------", "ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-topology
Chapter 4. Configuring Compute service storage
Chapter 4. Configuring Compute service storage You create an instance from a base image, which the Compute service copies from the Image (glance) service, and caches locally on the Compute nodes. The instance disk, which is the back end for the instance, is also based on the base image. You can configure the Compute service to store ephemeral instance disk data locally on the host Compute node or remotely on either an NFS share or Ceph cluster. Alternatively, you can also configure the Compute service to store instance disk data in persistent storage provided by the Block Storage (Cinder) service. You can configure image caching for your environment, and configure the performance and security of the instance disks. You can also configure the Compute service to download images directly from the RBD image repository without using the Image service API, when the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end. 4.1. Configuration options for image caching Use the parameters detailed in the following table to configure how the Compute service implements and manages an image cache on Compute nodes. Table 4.1. Compute (nova) service image cache parameters Configuration method Parameter Description Puppet nova::compute::image_cache::manager_interval Specifies the number of seconds to wait between runs of the image cache manager, which manages base image caching on Compute nodes. The Compute service uses this period to perform automatic removal of unused cached images when nova::compute::image_cache::remove_unused_base_images is set to True . Set to 0 to run at the default metrics interval of 60 seconds (not recommended). Set to -1 to disable the image cache manager. Default: 2400 Puppet nova::compute::image_cache::precache_concurrency Specifies the maximum number of Compute nodes that can pre-cache images in parallel. Note Setting this parameter to a high number can cause slower pre-cache performance and might result in a DDoS on the Image service. Setting this parameter to a low number reduces the load on the Image service, but can cause longer runtime to completion as the pre-cache is performed as a more sequential operation. Default: 1 Puppet nova::compute::image_cache::remove_unused_base_images Set to True to automatically remove unused base images from the cache at intervals configured by using manager_interval . Images are defined as unused if they have not been accessed during the time specified by using NovaImageCacheTTL . Default: True Puppet nova::compute::image_cache::remove_unused_resized_minimum_age_seconds Specifies the minimum age that an unused resized base image must be to be removed from the cache, in seconds. Unused resized base images younger than this will not be removed. Set to undef to disable. Default: 3600 Puppet nova::compute::image_cache::subdirectory_name Specifies the name of the folder where cached images are stored, relative to USDinstances_path . Default: _base Heat NovaImageCacheTTL Specifies the length of time in seconds that the Compute service should continue caching an image when it is no longer used by any instances on the Compute node. The Compute service deletes images cached on the Compute node that are older than this configured lifetime from the cache directory until they are needed again. Default: 86400 (24 hours) 4.2. Configuration options for instance ephemeral storage properties Use the parameters detailed in the following table to configure the performance and security of ephemeral storage used by instances. 
Note Red Hat OpenStack Platform (RHOSP) does not support the LVM image type for instance disks. Therefore, the [libvirt]/volume_clear configuration option, which wipes ephemeral disks when instances are deleted, is not supported because it only applies when the instance disk image type is LVM. Table 4.2. Compute (nova) service instance ephemeral storage parameters Configuration method Parameter Description Puppet nova::compute::default_ephemeral_format Specifies the default format that is used for a new ephemeral volume. Set to one of the following valid values: ext2 ext3 ext4 The ext4 format provides much faster initialization times than ext3 for new, large disks. Default: ext4 Puppet nova::compute::force_raw_images Set to True to convert non-raw cached base images to raw format. The raw image format uses more space than other image formats, such as qcow2. Non-raw image formats use more CPU for compression. When set to False , the Compute service removes any compression from the base image during compression to avoid CPU bottlenecks. Set to False if you have a system with slow I/O or low available space to reduce input bandwidth. Default: True Puppet nova::compute::use_cow_images Set to True to use CoW (Copy on Write) images in qcow2 format for instance disks. With CoW, depending on the backing store and host caching, there might be better concurrency achieved by having each instance operate on its own copy. Set to False to use the raw format. Raw format uses more space for common parts of the disk image. Default: True Puppet nova::compute::libvirt::preallocate_images Specifies the preallocation mode for instance disks. Set to one of the following valid values: none - No storage is provisioned at instance start. space - The Compute service fully allocates storage at instance start by running fallocate(1) on the instance disk images. This reduces CPU overhead and file fragmentation, improves I/O performance, and helps guarantee the required disk space. Default: none Hieradata override DEFAULT/resize_fs_using_block_device Set to True to enable direct resizing of the base image by accessing the image over a block device. This is only necessary for images with older versions of cloud-init that cannot resize themselves. This parameter is not enabled by default because it enables the direct mounting of images which might otherwise be disabled for security reasons. Default: False Hieradata override [libvirt]/images_type Specifies the image type to use for instance disks. Set to one of the following valid values: raw qcow2 flat rbd default Note RHOSP does not support the LVM image type for instance disks. When set to a valid value other than default the image type supersedes the configuration of use_cow_images . If default is specified, the configuration of use_cow_images determines the image type: If use_cow_images is set to True (default) then the image type is qcow2 . If use_cow_images is set to False then the image type is Flat . The default value is determined by the configuration of NovaEnableRbdBackend : NovaEnableRbdBackend: False Default: default NovaEnableRbdBackend: True Default: rbd 4.3. Configuring shared instance storage By default, when you launch an instance, the instance disk is stored as a file in the instance directory, /var/lib/nova/instances . You can configure an NFS storage backend for the Compute service to store these instance files on shared NFS storage. Prerequisites You must be using NFSv4 or later. 
Red Hat OpenStack Platform (RHOSP) does not support earlier versions of NFS. For more information, see the Red Hat Knowledgebase solution RHOS NFSv4-Only Support Notes . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create an environment file to configure shared instance storage, for example, nfs_instance_disk_backend.yaml . To configure an NFS backend for instance files, add the following configuration to nfs_instance_disk_backend.yaml : Replace <nfs_share> with the NFS share directory to mount for instance file storage, for example, '192.168.122.1:/export/nova' or '192.168.24.1:/var/nfs' . If using IPv6, use both double and single-quotes, e.g. "'[fdd0::1]:/export/nova'" . Optional: The default mount SELinux context for NFS storage when NFS backend storage is enabled is 'context=system_u:object_r:nfs_t:s0' . Add the following parameter to amend the mount options for the NFS instance file storage mount point: parameter_defaults: ... NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>' Replace <additional_nfs_mount_options> with a comma-separated list of the mount options you want to use for NFS instance file storage. For more information on the available mount options, see the mount man page: Save the updates to your environment file. Add your new environment file to the stack with your other environment files and deploy the overcloud: 4.4. Configuring image downloads directly from Red Hat Ceph RADOS Block Device (RBD) When the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end, and the Compute service uses local file-based ephemeral storage, you can configure the Compute service to download images directly from the RBD image repository without using the Image service API. This reduces the time it takes to download an image to the Compute node image cache at instance boot time, which improves instance launch time. Prerequisites The Image service back end is a Red Hat Ceph RADOS Block Device (RBD). The Compute service is using a local file-based ephemeral store for the image cache and instance disks. Procedure Log in to the undercloud as the stack user. Open your Compute environment file. To download images directly from the RBD back end, add the following configuration to your Compute environment file: Optional: If the Image service is configured to use multiple Red Hat Ceph Storage back ends, add the following configuration to your Compute environment file to identify the RBD back end to download images from: Replace <rbd_backend_id> with the ID used to specify the back end in the GlanceMultistoreConfig configuration, for example rbd2_store . Add the following configuration to your Compute environment file to specify the Image service RBD back end, and the maximum length of time that the Compute service waits to connect to the Image service RBD back end, in seconds: Add your Compute environment file to the stack with your other environment files and deploy the overcloud: To verify that the Compute service downloads images directly from RBD, create an instance then check the instance debug log for the entry "Attempting to export RBD image:". 4.5. Additional resources Configuring the Compute service (nova)
[ "[stack@director ~]USD source ~/stackrc", "parameter_defaults: NovaNfsEnabled: True NovaNfsShare: <nfs_share>", "parameter_defaults: NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>'", "man 8 mount.", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/nfs_instance_disk_backend.yaml", "parameter_defaults: ComputeParameters: NovaGlanceEnableRbdDownload: True NovaEnableRbdBackend: False", "parameter_defaults: ComputeParameters: NovaGlanceEnableRbdDownload: True NovaEnableRbdBackend: False NovaGlanceRbdDownloadMultistoreID: <rbd_backend_id>", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: glance/rbd_user: value: 'glance' glance/rbd_pool: value: 'images' glance/rbd_ceph_conf: value: '/etc/ceph/ceph.conf' glance/rbd_connect_timeout: value: '5'", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-compute-service-storage_compute-performance
probe::socket.aio_read
probe::socket.aio_read Name probe::socket.aio_read - Receiving message via sock_aio_read Synopsis socket.aio_read Values flags Socket flags value type Socket type value size Message size in bytes family Protocol family value name Name of this probe protocol Protocol value state Socket state value Context The message sender Description Fires at the beginning of receiving a message on a socket via the sock_aio_read function
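As a quick illustration, assuming SystemTap is installed, you could print the documented variables each time the probe fires:
stap -e 'probe socket.aio_read { printf("%s: family=%d type=%d size=%d\n", name, family, type, size) }'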
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-aio-read
2.2. Server Security
2.2. Server Security When a system is used as a server on a public network, it becomes a target for attacks. Hardening the system and locking down services is therefore of paramount importance for the system administrator. Before delving into specific issues, review the following general tips for enhancing server security: Keep all services current, to protect against the latest threats. Use secure protocols whenever possible. Serve only one type of network service per machine whenever possible. Monitor all servers carefully for suspicious activity. 2.2.1. Securing Services With TCP Wrappers and xinetd TCP Wrappers provide access control to a variety of services. Most modern network services, such as SSH, Telnet, and FTP, make use of TCP Wrappers, which stand guard between an incoming request and the requested service. The benefits offered by TCP Wrappers are enhanced when used in conjunction with xinetd , a super server that provides additional access, logging, binding, redirection, and resource utilization control. Note It is a good idea to use iptables firewall rules in conjunction with TCP Wrappers and xinetd to create redundancy within service access controls. Refer to Section 2.8, "Firewalls" for more information about implementing firewalls with iptables commands. The following subsections assume a basic knowledge of each topic and focus on specific security options. 2.2.1.1. Enhancing Security With TCP Wrappers TCP Wrappers are capable of much more than denying access to services. This section illustrates how they can be used to send connection banners, warn of attacks from particular hosts, and enhance logging functionality. Refer to the hosts_options man page for information about the TCP Wrapper functionality and control language. Refer to the xinetd.conf man page available online at http://linux.die.net/man/5/xinetd.conf for available flags, which act as options you can apply to a service. 2.2.1.1.1. TCP Wrappers and Connection Banners Displaying a suitable banner when users connect to a service is a good way to let potential attackers know that the system administrator is being vigilant. You can also control what information about the system is presented to users. To implement a TCP Wrappers banner for a service, use the banner option. This example implements a banner for vsftpd . To begin, create a banner file. It can be anywhere on the system, but it must have same name as the daemon. For this example, the file is called /etc/banners/vsftpd and contains the following lines: The %c token supplies a variety of client information, such as the user name and hostname, or the user name and IP address to make the connection even more intimidating. For this banner to be displayed to incoming connections, add the following line to the /etc/hosts.allow file: 2.2.1.1.2. TCP Wrappers and Attack Warnings If a particular host or network has been detected attacking the server, TCP Wrappers can be used to warn the administrator of subsequent attacks from that host or network using the spawn directive. In this example, assume that an attacker from the 206.182.68.0/24 network has been detected attempting to attack the server. Place the following line in the /etc/hosts.deny file to deny any connection attempts from that network, and to log the attempts to a special file: The %d token supplies the name of the service that the attacker was trying to access. To allow the connection and log it, place the spawn directive in the /etc/hosts.allow file. 
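For example, to allow and log such connections instead, the corresponding /etc/hosts.allow entry might look like this:
ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert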
Note Because the spawn directive executes any shell command, it is a good idea to create a special script to notify the administrator or execute a chain of commands in the event that a particular client attempts to connect to the server. 2.2.1.1.3. TCP Wrappers and Enhanced Logging If certain types of connections are of more concern than others, the log level can be elevated for that service using the severity option. For this example, assume that anyone attempting to connect to port 23 (the Telnet port) on an FTP server is an attacker. To denote this, place an emerg flag in the log files instead of the default flag, info , and deny the connection. To do this, place the following line in /etc/hosts.deny : This uses the default authpriv logging facility, but elevates the priority from the default value of info to emerg , which posts log messages directly to the console. 2.2.1.2. Enhancing Security With xinetd This section focuses on using xinetd to set a trap service and using it to control resource levels available to any given xinetd service. Setting resource limits for services can help thwart Denial of Service ( DoS ) attacks. Refer to the man pages for xinetd and xinetd.conf for a list of available options. 2.2.1.2.1. Setting a Trap One important feature of xinetd is its ability to add hosts to a global no_access list. Hosts on this list are denied subsequent connections to services managed by xinetd for a specified period or until xinetd is restarted. You can do this using the SENSOR attribute. This is an easy way to block hosts attempting to scan the ports on the server. The first step in setting up a SENSOR is to choose a service you do not plan on using. For this example, Telnet is used. Edit the file /etc/xinetd.d/telnet and change the flags line to read: Add the following line: This denies any further connection attempts to that port by that host for 30 minutes. Other acceptable values for the deny_time attribute are FOREVER, which keeps the ban in effect until xinetd is restarted, and NEVER, which allows the connection and logs it. Finally, the last line should read: This enables the trap itself. While using SENSOR is a good way to detect and stop connections from undesirable hosts, it has two drawbacks: It does not work against stealth scans. An attacker who knows that a SENSOR is running can mount a Denial of Service attack against particular hosts by forging their IP addresses and connecting to the forbidden port. 2.2.1.2.2. Controlling Server Resources Another important feature of xinetd is its ability to set resource limits for services under its control. It does this using the following directives: cps = <number_of_connections> <wait_period> - Limits the rate of incoming connections. This directive takes two arguments: <number_of_connections> - The number of connections per second to handle. If the rate of incoming connections is higher than this, the service is temporarily disabled. The default value is fifty (50). <wait_period> - The number of seconds to wait before re-enabling the service after it has been disabled. The default interval is ten (10) seconds. instances = <number_of_connections> - Specifies the total number of connections allowed to a service. This directive accepts either an integer value or UNLIMITED . per_source = <number_of_connections> - Specifies the number of connections allowed to a service by each host. This directive accepts either an integer value or UNLIMITED . 
rlimit_as = <number[K|M]> - Specifies the amount of memory address space the service can occupy in kilobytes or megabytes. This directive accepts either an integer value or UNLIMITED . rlimit_cpu = <number_of_seconds> - Specifies the amount of time in seconds that a service may occupy the CPU. This directive accepts either an integer value or UNLIMITED . Using these directives can help prevent any single xinetd service from overwhelming the system, resulting in a denial of service.
[ "220-Hello, %c 220-All activity on ftp.example.com is logged. 220-Inappropriate use will result in your access privileges being removed.", "vsftpd : ALL : banners /etc/banners/", "ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert", "in.telnetd : ALL : severity emerg", "flags = SENSOR", "deny_time = 30", "disable = no" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Server_Security
Chapter 1. Introduction
Chapter 1. Introduction You can use host-based subscriptions for Red Hat Enterprise Linux virtual machines in the following virtualization platforms: Red Hat Virtualization Red Hat Enterprise Linux Virtualization (KVM) Red Hat OpenStack Platform VMware vSphere Microsoft Hyper-V Nutanix AHV 1.1. Recommendations for subscription usage reporting For accurate subscription usage reporting, follow these recommendations: Configure virt-who properly as described in the following sections of this guide. Set up system purpose on your systems and activation keys. For more information, see the following resources: Managing activation keys in Managing content Editing the system purpose of a host in Managing hosts In a connected environment, configure the Satellite inventory upload plugin to upload your inventory to Red Hat Hybrid Cloud Console so that you can use the Subscriptions service for subscription usage reporting. You can configure the plugin in the Satellite web UI by navigating to Configure > RH Cloud > Inventory Upload . For more information about the Subscriptions service, see Getting Started with the Subscriptions Service in Subscription Central . 1.2. Host-based subscriptions Virtual machines require host-based subscriptions instead of physical subscriptions. Many host-based subscriptions provide entitlements for unlimited virtual machines. To allow virtual machines to report host-guest mappings to their hypervisors, you must install and configure virt-who. Virt-who queries the virtualization platform and reports hypervisor and virtual machine information to Red Hat Satellite. This information is used in reporting about subscription usage which you can get in the Subscriptions service on the Red Hat Hybrid Cloud Console. To see if a subscription requires virt-who, in the Satellite web UI, navigate to Content > Subscriptions . If there is a tick in the Requires Virt-Who column, you must configure virt-who to use that subscription. 1.3. Configuration overview To allow virtual machines to report host-guest mappings and subscription information through their hypervisors, complete the following steps: Prerequisites Import a Subscription Manifest that includes a host-based subscription into Satellite Server. For more information, see Importing a Red Hat Subscription Manifest into Satellite Server in Managing Content . If you are using Microsoft Hyper-V, enable remote management on the hypervisors. If you are using Nutanix AHV, consult How to configure virt-who for Nutanix AHV to work with RHSM in the Red Hat Knowledgebase . Create a user with read-only access and a non-expiring password on each hypervisor or virtualization manager. Virt-who uses this account to retrieve the list of virtual machines to report to Satellite Server. For Red Hat products and Microsoft Hyper-V, create a virt-who user on each hypervisor that runs Red Hat Enterprise Linux virtual machines. For VMware vSphere, create a virt-who user on the vCenter Server. The virt-who user requires at least read-only access to all objects in the vCenter Data Center. Procedure overview Section 1.4, "Virt-who configuration for each virtualization platform" . Use the table in this section to plan how to configure and deploy virt-who for your virtualization platform. Chapter 2, Creating a virt-who configuration . Create a virt-who configuration for each hypervisor or virtualization manager. Chapter 3, Deploying a virt-who configuration . Deploy the virt-who configurations using the scripts generated by Satellite. 
Register the virtual machines to Satellite Server. For more information, see Registering hosts in Managing hosts . 1.4. Virt-who configuration for each virtualization platform Virt-who is configured using files that specify details such as the virtualization type and the hypervisor or virtualization manager to query. The supported configuration is different for each virtualization platform. Typical virt-who configuration file This example shows a typical virt-who configuration file created using the Satellite web UI or Hammer CLI: The type and server values depend on the virtualization platform. The following list provides more detail. The username refers to a read-only user on the hypervisor or virtualization manager, which you must create before configuring virt-who. The rhsm_username refers to an automatically generated user that only has permissions for virt-who reporting to Satellite Server. Required configuration for each virtualization platform Use this information to plan your virt-who configuration: Red Hat Virtualization, Red Hat Enterprise Linux Virtualization (KVM), and Red Hat OpenStack Platform: the type specified in the configuration file is libvirt, the server specified is the hypervisor (one configuration file for each hypervisor), and the file is deployed on each hypervisor. VMware vSphere: the type specified is esx, the server specified is the vCenter Server, and the file is deployed on Satellite Server, Capsule Server, or a dedicated RHEL server. Microsoft Hyper-V: the type specified is hyperv, the server specified is the Hyper-V hypervisor (one configuration file for each hypervisor), and the file is deployed on Satellite Server, Capsule Server, or a dedicated RHEL server. Example virt-who configuration files Example virt-who configuration files for several common hypervisor types are shown. Example OpenStack virt-who configuration Example KVM virt-who configuration Example VMware virt-who configuration Important The rhevm and xen hypervisor types are not supported. The kubevirt hypervisor type is provided as a Technology Preview only.
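For Microsoft Hyper-V, a configuration file follows the same pattern as the examples listed above. The following sketch is illustrative only; the section name, hostnames, user names, and encrypted password values are assumptions that must be replaced with values from your own environment:

[virt-who-config-3]
type=hyperv
hypervisor_id=hostname
owner=Default_Organization
env=Library
server=hyperv1.example.com
username=virt_who_user
encrypted_password=xxxxxxxxxxx
rhsm_hostname=satellite.example.com
rhsm_username=virt_who_reporter_3
rhsm_encrypted_password=yyyyyyyyyyy
rhsm_prefix=/rhsm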
[ "[virt-who-config-1] type=libvirt hypervisor_id=hostname owner=Default_Organization env=Library server=hypervisor1.example.com username=virt_who_user encrypted_password=USDcr_password rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_1 rhsm_encrypted_password=USDuser_password rhsm_prefix=/rhsm", "cat /etc/virt-who.d/virt-who-config-1.conf This configuration file is managed via the virt-who configure plugin manual edits will be deleted. [virt-who-config-1] type=libvirt hypervisor_id=hostname owner=ORG env=Library server=qemu:///system <==== username=virt-who-user encrypted_password=xxxxxxxxxxx rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_1 rhsm_encrypted_password=yyyyyyyyyyy rhsm_prefix=/rhsm", "type=libvirt hypervisor_id=hostname owner=gss env=Library server=qemu+ssh://[email protected]/system username=root encrypted_password=33di3ksskd rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_2 rhsm_encrypted_password=23233dj3j3k rhsm_prefix=/rhsm", "type=esx hypervisor_id=hostname owner=gss env=Library server=vcenter.example.com [email protected] encrypted_password=33di3ksskd rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_2 rhsm_encrypted_password=23233dj3j3k rhsm_prefix=/rhsm" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_virtual_machine_subscriptions/introduction
OperatorHub APIs
OperatorHub APIs OpenShift Container Platform 4.14 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operatorhub_apis/index
Chapter 16. InsightsOperator [operator.openshift.io/v1]
Chapter 16. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Insights. status object status is the most recently observed status of the Insights operator. 16.1.1. .spec Description spec is the specification of the desired behavior of the Insights. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 16.1.2. .status Description status is the most recently observed status of the Insights operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. gatherStatus object gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. insightsReport object insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 16.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 16.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 16.1.5. .status.gatherStatus Description gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. Type object Property Type Description gatherers array gatherers is a list of active gatherers (and their statuses) in the last gathering. gatherers[] object gathererStatus represents information about a particular data gatherer. lastGatherDuration string lastGatherDuration is the total time taken to process all gatherers during the last gather event. lastGatherTime string lastGatherTime is the last time when Insights data gathering finished. An empty value means that no data has been gathered yet. 16.1.6. .status.gatherStatus.gatherers Description gatherers is a list of active gatherers (and their statuses) in the last gathering. Type array 16.1.7. .status.gatherStatus.gatherers[] Description gathererStatus represents information about a particular data gatherer. Type object Required conditions lastGatherDuration name Property Type Description conditions array conditions provide details on the status of each gatherer. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastGatherDuration string lastGatherDuration represents the time spent gathering. name string name is the name of the gatherer. 16.1.8. .status.gatherStatus.gatherers[].conditions Description conditions provide details on the status of each gatherer. Type array 16.1.9. .status.gatherStatus.gatherers[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. 
If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 16.1.10. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 16.1.11. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 16.1.12. .status.insightsReport Description insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. Type object Property Type Description downloadedAt string downloadedAt is the time when the last Insights report was downloaded. An empty value means that there has not been any Insights report downloaded yet and it usually appears in disconnected clusters (or clusters when the Insights data gathering is disabled). healthChecks array healthChecks provides basic information about active Insights health checks in a cluster. healthChecks[] object healthCheck represents an Insights health check attributes. 16.1.13. .status.insightsReport.healthChecks Description healthChecks provides basic information about active Insights health checks in a cluster. Type array 16.1.14. .status.insightsReport.healthChecks[] Description healthCheck represents an Insights health check attributes. Type object Required advisorURI description state totalRisk Property Type Description advisorURI string advisorURI provides the URL link to the Insights Advisor. description string description provides basic description of the healtcheck. state string state determines what the current state of the health check is. Health check is enabled by default and can be disabled by the user in the Insights advisor user interface. totalRisk integer totalRisk of the healthcheck. 
Indicator of the total risk posed by the detected issue; combination of impact and likelihood. The values can be from 1 to 4, and the higher the number, the more important the issue. 16.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/insightsoperators DELETE : delete collection of InsightsOperator GET : list objects of kind InsightsOperator POST : create an InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name} DELETE : delete an InsightsOperator GET : read the specified InsightsOperator PATCH : partially update the specified InsightsOperator PUT : replace the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/scale GET : read scale of the specified InsightsOperator PATCH : partially update scale of the specified InsightsOperator PUT : replace scale of the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/status GET : read status of the specified InsightsOperator PATCH : partially update status of the specified InsightsOperator PUT : replace status of the specified InsightsOperator 16.2.1. /apis/operator.openshift.io/v1/insightsoperators Table 16.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of InsightsOperator Table 16.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind InsightsOperator Table 16.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.5. HTTP responses HTTP code Reponse body 200 - OK InsightsOperatorList schema 401 - Unauthorized Empty HTTP method POST Description create an InsightsOperator Table 16.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.7. Body parameters Parameter Type Description body InsightsOperator schema Table 16.8. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 202 - Accepted InsightsOperator schema 401 - Unauthorized Empty 16.2.2. /apis/operator.openshift.io/v1/insightsoperators/{name} Table 16.9. Global path parameters Parameter Type Description name string name of the InsightsOperator Table 16.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an InsightsOperator Table 16.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 16.12. Body parameters Parameter Type Description body DeleteOptions schema Table 16.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified InsightsOperator Table 16.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 16.15. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified InsightsOperator Table 16.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.17. Body parameters Parameter Type Description body Patch schema Table 16.18. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified InsightsOperator Table 16.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.20. Body parameters Parameter Type Description body InsightsOperator schema Table 16.21. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty 16.2.3. /apis/operator.openshift.io/v1/insightsoperators/{name}/scale Table 16.22. Global path parameters Parameter Type Description name string name of the InsightsOperator Table 16.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified InsightsOperator Table 16.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 16.25. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified InsightsOperator Table 16.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.27. Body parameters Parameter Type Description body Patch schema Table 16.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified InsightsOperator Table 16.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.30. Body parameters Parameter Type Description body Scale schema Table 16.31. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 16.2.4. /apis/operator.openshift.io/v1/insightsoperators/{name}/status Table 16.32. Global path parameters Parameter Type Description name string name of the InsightsOperator Table 16.33. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified InsightsOperator Table 16.34. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 16.35. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified InsightsOperator Table 16.36. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.37. Body parameters Parameter Type Description body Patch schema Table 16.38. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified InsightsOperator Table 16.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.40. Body parameters Parameter Type Description body InsightsOperator schema Table 16.41. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty
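As a minimal illustration of working with these endpoints, the resource can also be read and modified with the oc client. The following sketch assumes an authenticated session with sufficient RBAC permissions and that the cluster-wide instance is named cluster, which is the usual convention for operator.openshift.io resources:

# Read the full object, including .status.gatherStatus and .status.insightsReport
oc get insightsoperators.operator.openshift.io cluster -o yaml

# Patch a spec field, for example raising the log level
oc patch insightsoperators.operator.openshift.io cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'

# Inspect a single status field
oc get insightsoperators.operator.openshift.io cluster -o jsonpath='{.status.gatherStatus.lastGatherTime}'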
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/insightsoperator-operator-openshift-io-v1
6.2. Enabling Infinispan Query DSL-based Queries
6.2. Enabling Infinispan Query DSL-based Queries In library mode, running Infinispan Query DSL-based queries is almost identical to running Lucene-based API queries. Prerequisites are: All libraries required for Infinispan Query (see Section 2.1.1, "Infinispan Query Dependencies in Library Mode" for details) on the classpath. Indexing enabled and configured for caches (optional). See Section 2.4, "Configure Indexing" for details. Annotated POJO cache values (optional). If indexing is not enabled, POJO annotations are also not required and are ignored if set. If indexing is not enabled, all fields that follow JavaBeans conventions are searchable instead of only the fields with Hibernate Search annotations.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/Enabling_DSL-based_Queries
5.3. Planning Security Domains
5.3. Planning Security Domains A security domain is a registry of PKI services. PKI services, such as CAs, register information about themselves in these domains so users of PKI services can find other services by inspecting the registry. The security domain service in Certificate System manages both the registration of PKI services for Certificate System subsystems and a set of shared trust policies. The registry provides a complete view of all PKI services provided by the subsystems within that domain. Each Certificate System subsystem must be either a host or a member of a security domain. A CA subsystem is the only subsystem which can host a security domain. The security domain shares the CA internal database for privileged user and group information to determine which users can update the security domain, register new PKI services, and issue certificates. A security domain is created during CA configuration, which automatically creates an entry in the security domain CA's LDAP directory. Each entry contains all the important information about the domain. Every subsystem within the domain, including the CA registering the security domain, is recorded under the security domain container entry. The URL to the CA uniquely identifies the security domain. The security domain is also given a friendly name, such as Example Corp Intranet PKI . All other subsystems - KRA, TPS, TKS, OCSP, and other CAs - must become members of the security domain by supplying the security domain URL when configuring the subsystem. Each subsystem within the security domain shares the same trust policies and trusted roots which can be retrieved from different servers and browsers. The information available in the security domain is used during configuration of a new subsystem, which makes the configuration process streamlined and automated. For example, when a TPS needs to connect to a CA, it can consult the security domain to get a list of available CAs. Each CA has its own LDAP entry. The security domain is an organizational group underneath that CA entry: Then there is a list of each subsystem type beneath the security domain organizational group, with a special object class ( pkiSecurityGroup ) to identify the group type: Each subsystem instance is then stored as a member of that group, with a special pkiSubsystem object class to identify the entry type: If a subsystem needs to contact another subsystem to perform an operation, it contacts the CA which hosts the security domain (by invoking a servlet which connects over the administrative port of the CA). The security domain CA then retrieves the information about the subsystem from its LDAP database, and returns that information to the requesting subsystem. The subsystem authenticates to the security domain using a subsystem certificate. Consider the following when planning the security domain: The CA hosting the security domain can be signed by an external authority. Multiple security domains can be set up within an organization. However, each subsystem can belong to only one security domain. Subsystems within a domain can be cloned. Cloning subsystem instances distributes the system load and provides failover points. The security domain streamlines configuration between the CA and KRA; the KRA can push its KRA connector information and transport certificates automatically to the CA instead of administrators having to manually copy the certificates over to the CA. The Certificate System security domain allows an offline CA to be set up. 
In this scenario, the offline root has its own security domain. All online subordinate CAs belong to a different security domain. The security domain streamlines configuration between the CA and OCSP. The OCSP can push its information to the CA for the CA to set up OCSP publishing and also retrieve the CA certificate chain from the CA and store it in the internal database.
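To see what this registry looks like in practice, the security domain container can be queried directly over LDAP. The following ldapsearch sketch is illustrative only; the host name, port, bind DN, and base DN are assumptions (the base DN matches the o=pki-tomcat-CA entries shown with this section) and must be adjusted for your instance:

# List all subsystems registered in the security domain
ldapsearch -x -H ldap://server.example.com:389 \
    -D "cn=Directory Manager" -W \
    -b "ou=Security Domain,o=pki-tomcat-CA" \
    "(objectClass=pkiSubsystem)" cn host SecurePort SubsystemName Clone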
[ "ou=Security Domain,dc=server.example.com-pki-ca", "cn=KRAList,ou=Security Domain,o=pki-tomcat-CA objectClass: top objectClass: pkiSecurityGroup cn: KRAList", "dn: cn=kra.example.com:8443,cn=KRAList,ou=Security Domain,o=pki-tomcat-CA objectClass: top objectClass: pkiSubsystem cn: kra.example.com:8443 host: server.example.com UnSecurePort: 8080 SecurePort: 8443 SecureAdminPort: 8443 SecureAgentPort: 8443 SecureEEClientAuthPort: 8443 DomainManager: false Clone: false SubsystemName: KRA kra.example.com 8443" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Certificate_Manager-Security_Domains
Part III. Debugging Tools
Part III. Debugging Tools
null
https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/part-debugging_tools
Chapter 6. Building simplified installer images to provision a RHEL for Edge image
Chapter 6. Building simplified installer images to provision a RHEL for Edge image You can build a RHEL for Edge Simplified Installer image, which is optimized for unattended installation to a device, and use it to provision a device with a RHEL for Edge image. 6.1. Simplified installer image build and deployment The RHEL for Edge Simplified Installer image is optimized for unattended installation to a device and supports both network-based and non-network-based deployments. However, for network-based deployment, it supports only UEFI HTTP boot. Build a RHEL for Edge Simplified Installer image by using the edge-simplified-installer image type. To build a RHEL for Edge Simplified Installer image, provide an existing OSTree commit. The resulting RHEL for Edge Simplified Installer contains a raw image that has a deployed OSTree commit. After you boot the Simplified Installer ISO image, it provisions a RHEL for Edge system that you can use on a hard disk or as a boot image in a virtual machine. You can log in to the deployed system with the user name and password that you specified in the blueprint that you used to create the Simplified Installer image. Composing and deploying a simplified RHEL for Edge image involves the following high-level steps: Install and register a RHEL system Install RHEL image builder Using RHEL image builder, create a blueprint with customizations for a RHEL for Edge Container image Import the RHEL for Edge blueprint in RHEL image builder Create a RHEL for Edge image embedded in an OCI container with a web server ready to deploy the commit as an OSTree repository Create a blueprint for the edge-simplified-installer image Build a simplified RHEL for Edge image Download the RHEL for Edge simplified image Install the raw image with the edge-simplified-installer image by using virt-install The following diagram represents the RHEL for Edge Simplified building and provisioning workflow: Figure 6.1. Building and provisioning RHEL for Edge in a network-based environment 6.2. Creating RHEL for Edge Simplified Installer images by using the CLI 6.2.1. Setting up a UEFI HTTP Boot server Set up a UEFI HTTP Boot server so that you can start to provision a RHEL for Edge virtual machine over the network by connecting to this UEFI HTTP Boot server. Prerequisites You have created the ISO simplified installer image. An HTTP server that serves the ISO content. Procedure Mount the ISO image to the directory of your choice: Replace /path_directory/installer.iso with the path to the RHEL for Edge bootable ISO image. Copy the files from the mounted image to the HTTP server root. This command creates the /var/www/html/rhel9-install/ directory with the contents of the image. Note Some copying methods can skip the .treeinfo file, which is required for a valid installation source. Running the cp command for whole directories as shown in this procedure copies .treeinfo correctly. Update the /var/www/html/EFI/BOOT/grub.cfg file by replacing: coreos.inst.install_dev=/dev/sda with coreos.inst.install_dev=/dev/vda linux /images/pxeboot/vmlinuz with linuxefi /images/pxeboot/vmlinuz initrd /images/pxeboot/initrd.img with initrdefi /images/pxeboot/initrd.img coreos.inst.image_file=/run/media/iso/disk.img.xz with coreos.inst.image_url=http://{IP-ADDRESS}/disk.img.xz The IP-ADDRESS is the IP address of this machine, which will serve as an HTTP boot server. Start the httpd service: As a result, after you set up a UEFI HTTP Boot server, you can install your RHEL for Edge devices by using UEFI HTTP boot. A consolidated shell sketch of this procedure follows.
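This sketch is illustrative only: the ISO path, the web server root, the grub.cfg location (assumed here to be under the copied /var/www/html/rhel9-install/ content), and the 192.0.2.1 address are assumptions to replace with your own values:

# Mount the Simplified Installer ISO and copy its contents (including .treeinfo) to the HTTP root
mkdir -p /mnt/rhel9-install /var/www/html/rhel9-install
mount -o loop,ro /path_directory/installer.iso /mnt/rhel9-install
cp -a /mnt/rhel9-install/. /var/www/html/rhel9-install/
umount /mnt/rhel9-install

# Apply the grub.cfg substitutions described above
sed -i -e 's|coreos.inst.install_dev=/dev/sda|coreos.inst.install_dev=/dev/vda|' \
       -e 's|linux /images/pxeboot/vmlinuz|linuxefi /images/pxeboot/vmlinuz|' \
       -e 's|initrd /images/pxeboot/initrd.img|initrdefi /images/pxeboot/initrd.img|' \
       -e 's|coreos.inst.image_file=/run/media/iso/disk.img.xz|coreos.inst.image_url=http://192.0.2.1/disk.img.xz|' \
       /var/www/html/rhel9-install/EFI/BOOT/grub.cfg

# Serve the content over HTTP
systemctl enable --now httpd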
6.2.2. Creating a blueprint for a Simplified image using RHEL image builder CLI To create a blueprint for a simplified RHEL for Edge image, you must add the following customizations to the blueprint: Customize the blueprint with the installation_device customization. Add a device file location to the blueprint to enable an unattended installation to the device. Add a URL to perform the initial device credential exchange. Customize the blueprint with the customizations.user customization and add users and user groups to it. Procedure Create a plain text file in Tom's Obvious, Minimal Language (TOML) format with the following content: Note The FDO customization in the blueprints is optional, and you can build your RHEL for Edge Simplified Installer image with no errors. name is the name and description is the description for your blueprint. 0.0.1 is the version number according to the Semantic Versioning scheme. Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a". Notice that currently there are no differences between packages and modules. Groups are package groups to be installed into the image, for example the anaconda-tools group package. If you do not know the modules and groups, leave them empty. installation_device is the customization to enable an unattended installation to your device. manufacturing_server_url is the URL to perform the initial device credential exchange. name is the user name to log in to the image. password is a password of your choice. groups are any user groups, such as "widget". Push (import) the blueprint to the RHEL image builder server: Check whether the created blueprint is successfully pushed and exists. Check whether the components and versions listed in the blueprint and their dependencies are valid: Additional resources Composing a RHEL for Edge image using RHEL image builder command-line 6.2.3. Creating a RHEL for Edge Simplified Installer image by using image builder CLI Create a RHEL for Edge Simplified image by using the RHEL image builder command-line interface. Prerequisites You created a blueprint for the RHEL for Edge Simplified image. You served an OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo. See Setting up a web server to install RHEL for Edge image. Procedure Create the bootable ISO image. Where blueprint-name is the RHEL for Edge blueprint name. edge-simplified-installer is the image type. --ref is the reference for where your commit is going to be created. --url is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. You can either start a RHEL for Edge Container or set up a web server. See Creating a RHEL for Edge Container image for non-network-based deployments and Setting up a web server to install RHEL for Edge image . A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks. Check the image compose status. The output displays the status in the following format: Note The image creation process can take up to ten minutes to complete. An end-to-end command sketch for this section is shown below.
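This sketch uses the composer-cli client; the blueprint name, installation device, FDO manufacturing server URL, user credentials, OSTree ref, and repository URL are all assumptions to replace with your own values:

# Write a minimal blueprint in TOML
cat > simplified-installer.toml << 'EOF'
name = "simplified-installer"
description = "RHEL for Edge Simplified Installer blueprint"
version = "0.0.1"

[customizations]
installation_device = "/dev/vda"

[customizations.fdo]
manufacturing_server_url = "http://10.0.0.2:8080"
diun_pub_key_insecure = "true"

[[customizations.user]]
name = "admin"
password = "admin"
groups = ["wheel"]
EOF

# Import the blueprint, confirm it exists, and verify its dependencies
composer-cli blueprints push simplified-installer.toml
composer-cli blueprints show simplified-installer
composer-cli blueprints depsolve simplified-installer

# Build the Simplified Installer ISO from the served OSTree commit, then track the build
composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo/ simplified-installer edge-simplified-installer
composer-cli compose status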
To interrupt the image creation process, run: To delete an existing image, run: Additional resources Composing a RHEL for Edge image using RHEL image builder command-line 6.3. Downloading a simplified RHEL for Edge image using the image builder command-line interface To download a RHEL for Edge image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You have created a RHEL for Edge image. Procedure Review the RHEL for Edge image status. The output must display the following: Download the image. RHEL image builder downloads the image as an .iso file at the current directory path where you run the command. The UUID number and the image size is displayed alongside. As a result, you downloaded a RHEL for Edge Simplified Installer ISO image. You can use it directly as a boot ISO to install a RHEL for Edge system. 6.4. Creating RHEL for Edge Simplified Installer images by using the GUI 6.4.1. Creating a blueprint for a Simplified image RHEL using image builder GUI To create a RHEL for Edge Simplified Installer image, you must create a blueprint and ensure that you customize it with: A device node location to enable an unattended installation to your device. A URL to perform the initial device credential exchange. A user or user group. Note You can also add any other customizations that your image requires. To create a blueprint for a simplified RHEL for Edge image in the RHEL image builder GUI, complete the following steps: Prerequisites You have opened the image builder app from the web console in a browser. See Accessing the RHEL image builder GUI in the RHEL web console . Procedure Click Create Blueprint in the upper-right corner of the RHEL image builder app. A dialog wizard with fields for the blueprint name and description opens. On the Details page: Enter the name of the blueprint and, optionally, its description. Click . Optional: On the Packages page, complete the following steps: In the Available packages search, enter the package name and click the > button to move it to the Chosen packages field. Search and include as many packages as you want. Click . Note The customizations are all optional unless otherwise specified. Optional: On the Kernel page, enter a kernel name and the command-line arguments. Optional: On the File system page, select Use automatic partitioning .The filesystem customization is not supported for OSTree systems, because OSTree images have their own mount rule, such as read-only. Click . Optional: On the Services page, you can enable or disable services: Enter the service names you want to enable or disable, separating them by a comma, by space, or by pressing the Enter key. Click . Optional: On the Firewall page, set up your firewall setting: Enter the Ports , and the firewall services you want to enable or disable. Click the Add zone button to manage your firewall rules for each zone independently. Click . On the Users page, add a users by following the steps: Click Add user . Enter a Username , a password , and a SSH key . You can also mark the user as a privileged user, by clicking the Server administrator checkbox. Note When you specify the user in the blueprint customization and then create an image from that blueprint, the blueprint creates the user under the /usr/lib/passwd directory and the password under the /usr/etc/shadow during installation time. You can log in to the device with the username and password you created for the blueprint. 
After you access the system, you must create users, for example, using the useradd command. Click . Optional: On the Groups page, add groups by completing the following steps: Click the Add groups button: Enter a Group name and a Group ID . You can add more groups. Click . Optional: On the SSH keys page, add a key: Click the Add key button. Enter the SSH key. Enter a User . Click . Optional: On the Timezone page, set your timezone settings: On the Timezone field, enter the timezone you want to add to your system image. For example, add the following timezone format: "US/Eastern". If you do not set a timezone, the system uses Universal Time, Coordinated (UTC) as default. Enter the NTP servers. Click . Optional: On the Locale page, complete the following steps: On the Keyboard search field, enter the package name you want to add to your system image. For example: ["en_US.UTF-8"]. On the Languages search field, enter the package name you want to add to your system image. For example: "us". Click . Mandatory: On the Others page, complete the following steps: In the Hostname field, enter the hostname you want to add to your system image. If you do not add a hostname, the operating system determines the hostname. Mandatory: In the Installation Devices field, enter a valid node for your system image to enable an unattended installation to your device. For example: dev/sda1 . Click . Optional: On the FIDO device onboarding page, complete the following steps: On the Manufacturing server URL field, enter the manufacturing server URL to perform the initial device credential exchange, for example: "http://10.0.0.2:8080". The FDO customization in the blueprints is optional, and you can build your RHEL for Edge Simplified Installer image with no errors. On the DIUN public key insecure field, enter the certification public key hash to perform the initial device credential exchange. This field accepts "true" as value, which means this is an insecure connection to the manufacturing server. For example: manufacturing_server_url="http://USD{FDO_SERVER}:8080" diun_pub_key_insecure="true" . You must use only one of these three options: "key insecure", "key hash" and "key root certs". On the DIUN public key hash field, enter the hashed version of your public key. For example: 17BD05952222C421D6F1BB1256E0C925310CED4CE1C4FFD6E5CB968F4B73BF73 . You can get the key hash by generating it based on the certificate of the manufacturing server. To generate the key hash, run the command: The /etc/fdo/aio/keys/diun_cert.pem is the certificate that is stored in the manufacturing server. On the DIUN public key root certs field, enter the public key root certs. This field accepts the content of the certification file that is stored in the manufacturing server. To get the content of certificate file, run the command: Click . On the Review page, review the details about the blueprint. Click Create . The RHEL image builder view opens, listing existing blueprints. 6.4.2. Creating a RHEL for Edge Simplified Installer image using image builder GUI To create a RHEL for Edge Simplified image by using RHEL image builder GUI, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You opened the RHEL image builder app from the web console in a browser. You created a blueprint for the RHEL for Edge Simplified image. You served an OSTree repository of the commit to embed in the image, for example, http://10.0.2.2:8080/repo . See Setting up a web server to install RHEL for Edge image . 
The FDO manufacturing server is up and running. Procedure Access mage builder dashboard. On the blueprint table, find the blueprint you want to build an image for. Navigate to the Images tab and click Create Image . The Create image wizard opens. On the Image output page, complete the following steps: From the Select a blueprint list, select the blueprint you created for the RHEL for Edge Simplified image. From the Image output type list, select RHEL for Edge Simplified Installer (.iso) . In the Image Size field, enter the image size. Minimum image size required for Simplified Installer image is: Click . In the OSTree settings page, complete the following steps: In the Repository URL field, enter the repository URL to where the parent OSTree commit will be pulled from. In the Ref field, enter the ref branch name path. If you do not enter a ref , the default ref for the distro is used. On the Review page, review the image customization and click Create . The image build starts and takes up to 20 minutes to complete. To stop the building, click Stop build . 6.4.3. Downloading a simplified RHEL for Edge image using the image builder GUI To download a RHEL for Edge image by using RHEL image builder GUI, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You have successfully created a RHEL for Edge image. See link. Procedure Access the RHEL image builder dashboard. The blueprint list dashboard opens. In the blueprint table, find the blueprint you built your RHEL for Edge Simplified Installer image for. Navigate to the Images tab. Choose one of the options: Download the image. Download the logs of the image to inspect the elements and verify if any issue is found. Note You can use the RHEL for Edge Simplified Installer ISO image that you downloaded directly as a boot ISO to install a RHEL for Edge system. 6.5. Provisioning the RHEL for Edge Simplified Installer image 6.5.1. Deploying the Simplified ISO image in a Virtual Machine Deploy the RHEL for Edge ISO image you generated by creating a RHEL for Edge Simplified image by using any the following installation sources: UEFI HTTP Boot virt-install This example shows how to create a virt-install installation source from your ISO image for a network-based installation . Prerequisites You have created an ISO image. You set up a network configuration to support UEFI HTTP boot. Procedure Set up a network configuration to support UEFI HTTP boot. See Setting up UEFI HTTP boot with libvirt . Use the virt-install command to create a RHEL for Edge Virtual Machine from the UEFI HTTP Boot. After you run the command, the Virtual Machine installation starts. Verification Log in to the created Virtual Machine. 6.5.2. Deploying the Simplified ISO image from a USB flash drive Deploy the RHEL for Edge ISO image you generated by creating a RHEL for Edge Simplified image by using an USB installation . This example shows how to create a USB installation source from your ISO image. Prerequisites You have created a simplified installer image, which is an ISO image. You have a 8 GB USB flash drive. Procedure Copy the ISO image file to a USB flash drive. Connect the USB flash drive to the port of the computer you want to boot. Boot the ISO image from the USB flash drive.The boot menu shows you the following options: Choose Install Red Hat Enterprise Linux 9. This starts the system installation. Additional resources Booting the installation 6.6. 
Creating and booting a RHEL for Edge image in FIPS mode You can create and boot a FIPS-enabled RHEL for Edge image. Before you compose the image, you must change the value of the fips directive in your blueprint. You can build the following image types in FIPS mode: edge-installer edge-simplified-installer edge-raw-image edge-ami edge-vsphere Important You can enable FIPS mode only during the image provisioning process. You cannot change to FIPS mode after the non-FIPS image build starts. If the host that builds the FIPS-enabled image is not FIPS-enabled, any keys generated by this host are not FIPS-compliant, but the resulting image is FIPS-compliant. Prerequisites You created and downloaded a RHEL for Edge Container OSTree commit. You have installed Podman on your system. See the Red Hat Knowledgebase solution How to install Podman in RHEL . You are logged in as the root user or a user who is a member of the weldr group. Procedure Create a plain text file in the Tom's Obvious, Minimal Language (TOML) format with the following content: Import the blueprint to the RHEL image builder server: List the existing blueprints to check whether the created blueprint is successfully imported and exists: Check whether the components and versions listed in the blueprint and their dependencies are valid: Serve an OSTree repository of the commit to embed in the image, for example, http://10.0.2.2:8080/repo . For more information, see Setting up an UEFI HTTP Boot server . Create the bootable ISO image: Review the RHEL for Edge image status: Download the image: RHEL image builder downloads the image as an .iso file to the current directory path. The UUID number and the image size are displayed alongside: Create a RHEL for Edge virtual machine from the UEFI HTTP Boot server, for example: After you enter the command, the virtual machine installation starts. Verification Log in to the created virtual machine with the username and password that you configured in your blueprint. Check if FIPS mode is enabled:
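For example, run the following commands on the deployed system. The first command is the check referenced above; the second reads the kernel flag directly and is included here only as an additional, optional confirmation:
# Report the FIPS status as seen by the user-space tooling.
fips-mode-setup --check
# Optional: the kernel prints 1 when FIPS mode is enabled.
cat /proc/sys/crypto/fips_enabled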
[ "mkdir /mnt/rhel9-install/ mount -o loop,ro -t iso9660 /path_directory/installer.iso /mnt/rhel9-install/", "mkdir /var/www/html/httpboot/ cp -R /mnt/rhel9-install/* /var/www/html/httpboot/ chmod -R +r /var/www/html/httpboot/*", "systemctl start httpd.service", "name = \"simplified-installer-blueprint\" description = \"blueprint for the simplified installer image\" version = \"0.0.1\" packages = [] modules = [] groups = [] distro = \"\" [customizations] installation_device = \"/dev/vda\" [[customizations.user]] name = \"admin\" password = \"admin\" groups = [\"users\", \"wheel\"] [customizations.fdo] manufacturing_server_url = \"http://10.0.0.2:8080\" diun_pub_key_insecure = \"true\"", "composer-cli blueprints push blueprint-name.toml", "composer-cli blueprints show blueprint-name", "composer-cli blueprints depsolve blueprint-name", "composer-cli compose start-ostree blueprint-name edge-simplified-installer --ref rhel/9/x86_64/edge --url URL-OSTree-repository \\", "composer-cli compose status", "<UUID> RUNNING date blueprint-name blueprint-version image-type", "composer-cli compose cancel <UUID>", "composer-cli compose delete <UUID>", "composer-cli compose status", "<UUID> FINISHED date blueprint-name blueprint-version image-type", "composer-cli compose image <UUID>", "<UUID> -simplified-installer.iso: size MB", "openssl x509 -fingerprint -sha256 -noout -in /etc/fdo/aio/keys/diun_cert.pem | cut -d\"=\" -f2 | sed 's/://g'", "cat /etc/fdo/aio/keys/diun_cert.pem.", "virt-install --name edge-install-image --disk path=\" \", ,format=qcow2 --ram 3072 --memory 4096 --vcpus 2 --network network=integration,mac=mac_address --os-type linux --os-variant rhel9 --cdrom \"/var/lib/libvirt/images/\"ISO_FILENAME\" --boot uefi,loader_ro=yes,loader_type=pflash,nvram_template=/usr/share/edk2/ovmf/OVMF_VARS.fd,loader_secure=no --virt-type kvm --graphics none --wait=-1 --noreboot", "Install Red Hat Enterprise Linux 9 Test this media & install Red Hat Enterprise Linux 9", "name = \"system-fips-mode-enabled\" description = \"blueprint with FIPS enabled \" version = \"0.0.1\" [ostree] ref= \"example/edge\" url= \"http://example.com/repo\" [customizations] installation_device = \"/dev/vda\" fips = true [[customizations.user]] name = \"admin\" password = \"admin\" groups = [\"users\", \"wheel\"] [customizations.fdo] manufacturing_server_url = \"https://fdo.example.com\" diun_pub_key_insecure = true", "composer-cli blueprints push <blueprint-name> .toml", "composer-cli blueprints show <blueprint-name>", "composer-cli blueprints depsolve <blueprint-name>", "composer-cli compose start-ostree \\ <blueprint-name> \\ edge-simplified-installer \\ --ref rhel/8/x86_64/edge \\ --url <URL-OSTree-repository> \\", "composer-cli compose status ... <UUID> FINISHED <date> <blueprint-name> <blueprint-version> <image-type> ...", "composer-cli compose image <UUID>", "<UUID> -simplified-installer.iso: <size> MB", "virt-install \\ --name edge-device --disk path=\"/var/lib/libvirt/images/edge-device.qcow2\",size=5,format=qcow2 \\ --memory 4096 \\ --vcpus 2 \\ --network network=default \\ --os-type linux \\ --os-variant rhel8.9 \\ --cdrom /var/lib/libvirt/images/ <UUID> -simplified-installer.iso \\ --boot uefi,loader.secure=false \\ --virt-type kvm \\ --graphics none \\ --wait=-1 \\ --noreboot", "fips-mode-setup --check FIPS mode is enabled." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/building-and-provisioning-simplified-installer-images_composing-installing-managing-rhel-for-edge-images
Chapter 2. OpenShift Data Foundation upgrade channels and releases
Chapter 2. OpenShift Data Foundation upgrade channels and releases In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation. For example, OpenShift Data Foundation 4.9 upgrade channels recommend upgrades within 4.9. They do not recommend upgrades to 4.10 or later releases. This strategy ensures that administrators can explicitly decide to upgrade to the minor version of OpenShift Data Foundation. Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. Out of the box, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform. So on OpenShift Container Platform 4.9, OpenShift Data Foundation 4.9 will be the latest version which can be installed. OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.9, OpenShift Container Platform 4.9 and 4.10 (when generally available) are supported. OpenShift Container Platform 4.10 is supported to maintain forward compatibility with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release. Important Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.8 to 4.9 and then to 4.10. You cannot update from OpenShift Container Platform 4.8 to 4.10 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation. OpenShift Data Foundation 4.9 offers the following upgrade channel: stable-4.9 eus-4.y (only when running an even-numbered 4.y cluster release, like 4.10) stable-4.9 channel Once a new version is Generally Available, the stable channel corresponding to the minor version gets updated with the new image which can be used to upgrade. You can use the stable-4.9 channel to upgrade from OpenShift Container Storage 4.8 and upgrades within 4.9. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Data Foundation offer Extended Update Support (EUS). These EUS versions extend the Full and Maintenance support phases for customers with Standard and Premium Subscriptions to 18 months. The only difference between stable-4.y and eus-4.y channels is that the EUS channels will include the release only when the EUS release is available.
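If you manage Operator subscriptions from the command line rather than the web console, you can inspect and change the channel on the OpenShift Data Foundation subscription with oc. The subscription name odf-operator and the openshift-storage namespace used below are typical defaults and may differ in your cluster, so treat this as an illustrative sketch:
# Show the channel currently configured for the ODF operator subscription.
oc get subscription odf-operator -n openshift-storage -o jsonpath='{.spec.channel}{"\n"}'
# Switch the subscription to the stable-4.9 channel.
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"channel":"stable-4.9"}}'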
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/upgrading_to_openshift_data_foundation/openshift-data-foundation-upgrade-channels-and-releases_rhodf
Chapter 1. Overview
Chapter 1. Overview Security The SELinux userspace has been rebased and provides various enhancements and performance improvements. Notably, the new SELinux module store supports priorities, and the SELinux Common Intermediate Language (CIL) has been introduced. OpenSCAP workbench now provides a new SCAP Security Guide integration dialog and enables modification of SCAP policies using a graphical tool. The OpenSCAP suite now includes support for scanning containers using the atomic scan command. Upgraded firewalld starts and restarts significantly faster due to a new transaction model. It also provides improved management of connections, interfaces, and sources, a new default logging option, and ipset support. The audit daemon introduces a new flush technique, which significantly improves performance. Audit policy, configuration, and logging have been enhanced and now support a number of new options. Media Access Control Security (MACsec) encryption over Ethernet is now supported. See Chapter 15, Security for more information on security enhancements. Identity Management The highlighted new features and improvements related to Identity Management (IdM) include: Improved performance of both IdM servers and clients in large customer environments Enhanced topology management and replica installation Extended smart card support for Active Directory (AD) users Fine-grained configuration of one-time password (OTP) authentication Improved troubleshooting capabilities of IdM clients. Red Hat Enterprise Linux 7.2 introduced the Ipsilon identity provider service for federated single sign-on (SSO). Subsequently, Red Hat has released Red Hat Single Sign-On as a web SSO solution based on the Keycloak community project. Red Hat Single Sign-On provides greater capabilities than Ipsilon and is designated as the standard web SSO solution across the Red Hat product portfolio. For details on Red Hat Single Sign-On, see: Red Hat Single Sign-On product page Red Hat Single Sign-On Release Notes Note that Red Hat does not plan to upgrade Ipsilon from Technology Preview to a fully supported feature. The ipsilon packages will be removed from Red Hat Enterprise Linux in a future minor release. Entitlements to Red Hat Single Sign-On are currently available using Red Hat JBoss Middleware or OpenShift Container Platform subscriptions. For detailed information on changes in IdM, refer to Chapter 5, Authentication and Interoperability . Core Kernel Support for Checkpoint/Restore in User space (CRIU) has been expanded to the the little-endian variant of IBM Power Systems architecture. Heterogeneous memory management (HMM) feature has been introduced as a Technology Preview. For more kernel features, refer to Chapter 12, Kernel . For information about Technology Previews related to kernel, see Chapter 42, Kernel . Networking Open vSwitch now uses kernel lightweight tunnel support. Bulking in the memory allocator subsystem is now supported. NetworkManager now supports new device types, improved stacking of virtual devices, LLDP, stable privacy IPv6 addresses (RFC 7217), detects duplicate IPv4 addresses, and controls a host name through systemd-hostnamed . Additionally, the user can set a DHCP timeout property and DNS priorities. For more networking features, see Chapter 14, Networking . Platform Hardware Enablement Support for the Coherent Accelerator Processor Interface (CAPI) flash block adapter has been added. For detailed information, see Chapter 10, Hardware Enablement . 
Real-Time Kernel A new scheduler policy, SCHED_DEADLINE has been introduced as Technology Preview. This new policy is available in the upstream kernel and shows promise for certain Realtime use cases. For details, see Chapter 43, Real-Time Kernel . Storage and File Systems Support for Non-Volatile Dual In-line Memory Module (NVDIMM) persistent memory architecture has been added, which includes the addition of the libnvdimm kernel subsystem. NVDIMM memory can be accessed either as a block storage device, which is fully supported in Red Hat Enterprise Linux 7.3, or in Direct Access (DAX) mode, which is provided by the ext4 and XFS file systems as a Technology Preview in Red Hat Enterprise Linux 7.3. For more information, see Chapter 17, Storage and Chapter 12, Kernel in the New Features part, and Chapter 39, File Systems in the Technology Previews part. A new Ceph File System (CephFS) kernel module, introduced as a Technology Preview, enables Red Hat Enterprise Linux Linux nodes to mount Ceph File Systems from Red Hat Ceph Storage clusters. For more information, see Chapter 39, File Systems . Support for pNFS SCSI file sharing has been introduced as a Technology Preview. For details, refer to Chapter 39, File Systems . LVM2 support for RAID-level takeover, the ability to switch between RAID types, is now available as a Technology Preview. See Chapter 45, Storage for more information. Clustering For Red Hat Enterprise Linux 7.3, the Red Hat High Availability Add-On supports the following major enhancements: The ability to better configure and trigger notifications when the status of a managed cluster changes with the introduction of enhanced pacemaker alerts. The ability to configure Pacemaker to manage multi-site clusters across geo-locations for disaster recovery and scalability through the use of the Booth ticket manager. This feature is provided as a Technology Preview. The ability to configure Pacemaker to manage stretch clusters using a separate quorum device (QDevice), which acts as a third-party arbitration device for the cluster. This functionality is provided as a Technology Preview, and its primary use is to allow a cluster to sustain more node failures than standard quorum rules allow. For more information on enhancements to the Red Hat High Availability Add-On, see Chapter 6, Clustering in the New Features Part and Chapter 38, Clustering in the Technology Previews part. Desktop A new instant messaging client, pidgin , has been introduced, which supports off-the-record (OTR) messaging and the Microsoft Lync instant messaging application. For more information regarding changes in desktop, refer to Chapter 8, Desktop . Internet of Things Red Hat Enterprise Linux 7.3 provides latest Bluetooth support, including support for connecting to Bluetooth Low Energy (LE) devices; see Chapter 14, Networking . Controller Area Network (CAN) device drivers are now supported, see Chapter 12, Kernel for more information. Red Hat Enterprise Linux 7 kernel is now able to use the embedded MMC (eMMC) interface version 5.0. For details, refer to Chapter 10, Hardware Enablement . Linux Containers The System Security Services Daemon (SSSD) container is now available for Red Hat Enterprise Linux Atomic Host as Technology Preview. See Chapter 37, Authentication and Interoperability for details. Red Hat Insights Since Red Hat Enterprise Linux 7.2, the Red Hat Insights service is available. 
Red Hat Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators. The service is hosted and delivered through the customer portal at https://access.redhat.com/insights/ or through Red Hat Satellite. For further information, data security, and limits, refer to https://access.redhat.com/insights/splash/ . Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Kickstart Configurator Registration Assistant NFS Helper Linter for Dockerfile Multipath Helper iSCSI Helper Code Browser
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/chap-red_hat_enterprise_linux-7.3_release_notes-overview
Preface
Preface Depending on the type of your deployment, you can choose one of the following procedures to replace a storage device: For dynamically created storage clusters deployed on AWS, see: Section 1.1, "Replacing operational or failed storage devices on AWS user-provisioned infrastructure" . Section 1.2, "Replacing operational or failed storage devices on AWS installer-provisioned infrastructure" . For dynamically created storage clusters deployed on VMware, see Section 2.1, "Replacing operational or failed storage devices on VMware infrastructure" . For dynamically created storage clusters deployed on Microsoft Azure, see Section 3.1, "Replacing operational or failed storage devices on Azure installer-provisioned infrastructure" . For storage clusters deployed using local storage devices, see: Section 5.1, "Replacing operational or failed storage devices on clusters backed by local storage devices" . Section 5.2, "Replacing operational or failed storage devices on IBM Power" . Section 5.3, "Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure" . Note OpenShift Data Foundation does not support heterogeneous OSD sizes.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/preface-replacing-devices
Chapter 33. XQuery
Chapter 33. XQuery Overview XQuery was originally devised as a query language for data stored in XML form in a database. The XQuery language enables you to select parts of the current message, when the message is in XML format. XQuery is a superset of the XPath language; hence, any valid XPath expression is also a valid XQuery expression. Java syntax You can pass an XQuery expression to xquery() in several ways. For simple expressions, you can pass the XQuery expressions as a string ( java.lang.String ). For longer XQuery expressions, you might prefer to store the expression in a file, which you can then reference by passing a java.io.File argument or a java.net.URL argument to the overloaded xquery() method. The XQuery expression implicitly acts on the message content and returns a node set as the result. Depending on the context, the return value is interpreted either as a predicate (where an empty node set is interpreted as false) or as an expression. Adding the Saxon module To use XQuery in your routes you need to add a dependency on camel-saxon to your project as shown in Example 33.1, "Adding the camel-saxon dependency" . Example 33.1. Adding the camel-saxon dependency Camel on EAP deployment The camel-saxon component is supported by the Camel on EAP (Wildfly Camel) framework, which offers a simplified deployment model on the Red Hat JBoss Enterprise Application Platform (JBoss EAP) container. Static import To use the xquery() static method in your application code, include the following import statement in your Java source files: Variables Table 33.1, "XQuery variables" lists the variables that are accessible when using XQuery. Table 33.1. XQuery variables Variable Type Description exchange Exchange The current Exchange in.body Object The body of the IN message out.body Object The body of the OUT message in.headers. key Object The IN message header whose key is key out.headers. key Object The OUT message header whose key is key key Object The Exchange property whose key is key Example Example 33.2, "Route using XQuery" shows a route that uses XQuery. Example 33.2. Route using XQuery
[ "<!-- Maven POM File --> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-saxon</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "import static org.apache.camel.component.xquery.XQueryBuilder.xquery;", "<camelContext> <route> <from uri=\"activemq:MyQueue\"/> <filter> <language langauge=\"xquery\">/foo:person[@name='James']</language> <to uri=\"mqseries:SomeOtherQueue\"/> </filter> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/XQuery
Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1]
Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Type object Required roleRef 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 4.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 4.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 4.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognized the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespace, such as "User" or "Group", and this value is not empty the Authorizer should report an error. 4.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/rolebindings GET : list or watch objects of kind RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/rolebindings GET : watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings DELETE : delete collection of RoleBinding GET : list or watch objects of kind RoleBinding POST : create a RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings GET : watch individual changes to a list of RoleBinding. 
deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} GET : watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/rbac.authorization.k8s.io/v1/rolebindings HTTP method GET Description list or watch objects of kind RoleBinding Table 4.1. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 4.2.2. /apis/rbac.authorization.k8s.io/v1/watch/rolebindings HTTP method GET Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings HTTP method DELETE Description delete collection of RoleBinding Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RoleBinding Table 4.5. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body RoleBinding schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 4.2.4. 
/apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings HTTP method GET Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method DELETE Description delete a RoleBinding Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 4.13. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body RoleBinding schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty 4.2.6. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method GET Description watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
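As an illustration of the schema described above, the following sketch creates a RoleBinding that grants a hypothetical Role named pod-reader in the my-project namespace to the user alice, and then reads it back. All names are placeholders and the Role is assumed to exist already:
cat << EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
EOF

# Confirm that the binding was created in the namespace.
oc get rolebinding read-pods -n my-project -o yaml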
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/rbac_apis/rolebinding-rbac-authorization-k8s-io-v1
8.2. Backups
8.2. Backups Backups have two major purposes: To permit restoration of individual files To permit wholesale restoration of entire file systems The first purpose is the basis for the typical file restoration request: a user accidentally deletes a file and asks that it be restored from the latest backup. The exact circumstances may vary somewhat, but this is the most common day-to-day use for backups. The second situation is a system administrator's worst nightmare: for whatever reason, the system administrator is staring at hardware that used to be a productive part of the data center. Now, it is little more than a lifeless chunk of steel and silicon. The thing that is missing is all the software and data you and your users have assembled over the years. Supposedly everything has been backed up. The question is: has it? And if it has, can you restore it? 8.2.1. Different Data: Different Backup Needs Look at the kinds of data [29] processed and stored by a typical computer system. Notice that some of the data hardly ever changes, and some of the data is constantly changing. The pace at which data changes is crucial to the design of a backup procedure. There are two reasons for this: A backup is nothing more than a snapshot of the data being backed up. It is a reflection of that data at a particular moment in time. Data that changes infrequently can be backed up infrequently, while data that changes often must be backed up more frequently. System administrators that have a good understanding of their systems, users, and applications should be able to quickly group the data on their systems into different categories. However, here are some examples to get you started: Operating System This data normally only changes during upgrades, the installation of bug fixes, and any site-specific modifications. Note Should you even bother with operating system backups? This is a question that many system administrators have pondered over the years. On the one hand, if the installation process is relatively easy, and if the application of bugfixes and customizations are well documented and easily reproducible, reinstalling the operating system may be a viable option. On the other hand, if there is the least doubt that a fresh installation can completely recreate the original system environment, backing up the operating system is the best choice, even if the backups are performed much less frequently than the backups for production data. Occasional operating system backups also come in handy when only a few system files must be restored (for example, due to accidental file deletion). Application Software This data changes whenever applications are installed, upgraded, or removed. Application Data This data changes as frequently as the associated applications are run. Depending on the specific application and your organization, this could mean that changes take place second-by-second or once at the end of each fiscal year. User Data This data changes according to the usage patterns of your user community. In most organizations, this means that changes take place all the time. Based on these categories (and any additional ones that are specific to your organization), you should have a pretty good idea concerning the nature of the backups that are needed to protect your data. Note You should keep in mind that most backup software deals with data on a directory or file system level. In other words, your system's directory structure plays a part in how backups will be performed. 
This is another reason why it is always a good idea to carefully consider the best directory structure for a new system and group files and directories according to their anticipated usage. [29] We are using the term data in this section to describe anything that is processed via backup software. This includes operating system software, application software, as well as actual data. No matter what it is, as far as backup software is concerned, it is all data.
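To make these categories concrete, the following crontab sketch backs up the rarely changing operating system and application software once a week and the frequently changing application and user data every night. The paths, schedules, and use of tar are purely illustrative (the /var/lib/appdata path is an invented example), and production backups are normally handled by dedicated backup software:
# Weekly backup of operating system and application software (Sunday, 01:00).
0 1 * * 0  tar -czf /backup/system-$(date +\%Y\%m\%d).tar.gz /etc /usr /opt
# Nightly backup of application data and user data (every day, 02:00).
0 2 * * *  tar -czf /backup/data-$(date +\%Y\%m\%d).tar.gz /home /var/lib/appdata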
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-disaster-backups
Chapter 14. Hardware networks
Chapter 14. Hardware networks 14.1. About Single Root I/O Virtualization (SR-IOV) hardware networks The Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can share a single device with multiple pods. SR-IOV can segment a compliant network device, recognized on the host node as a physical function (PF), into multiple virtual functions (VFs). The VF is used like any other network device. The SR-IOV network device driver for the device determines how the VF is exposed in the container: netdevice driver: A regular kernel network device in the netns of the container vfio-pci driver: A character device mounted in the container You can use SR-IOV network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You can enable SR-IOV on a node by using the following command: USD oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" 14.1.1. Components that manage SR-IOV network devices The SR-IOV Network Operator creates and manages the components of the SR-IOV stack. It performs the following functions: Orchestrates discovery and management of SR-IOV network devices Generates NetworkAttachmentDefinition custom resources for the SR-IOV Container Network Interface (CNI) Creates and updates the configuration of the SR-IOV network device plugin Creates node specific SriovNetworkNodeState custom resources Updates the spec.interfaces field in each SriovNetworkNodeState custom resource The Operator provisions the following components: SR-IOV network configuration daemon A daemon set that is deployed on worker nodes when the SR-IOV Network Operator starts. The daemon is responsible for discovering and initializing SR-IOV network devices in the cluster. SR-IOV Network Operator webhook A dynamic admission controller webhook that validates the Operator custom resource and sets appropriate default values for unset fields. SR-IOV Network resources injector A dynamic admission controller webhook that provides functionality for patching Kubernetes pod specifications with requests and limits for custom network resources such as SR-IOV VFs. The SR-IOV network resources injector adds the resource field to only the first container in a pod automatically. SR-IOV network device plugin A device plugin that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. Device plugins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plugins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources. SR-IOV CNI plugin A CNI plugin that attaches VF interfaces allocated from the SR-IOV network device plugin directly into a pod. SR-IOV InfiniBand CNI plugin A CNI plugin that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV network device plugin directly into a pod. Note The SR-IOV Network resources injector and SR-IOV Network Operator webhook are enabled by default and can be disabled by editing the default SriovOperatorConfig CR. Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. 14.1.1.1. 
Supported platforms The SR-IOV Network Operator is supported on the following platforms: Bare metal Red Hat OpenStack Platform (RHOSP) 14.1.1.2. Supported devices OpenShift Container Platform supports the following network interface controllers: Table 14.1. Supported network interface controllers Manufacturer Model Vendor ID Device ID Broadcom BCM57414 14e4 16d7 Broadcom BCM57508 14e4 1750 Intel X710 8086 1572 Intel XL710 8086 1583 Intel XXV710 8086 158b Intel E810-CQDA2 8086 1592 Intel E810-2CQDA2 8086 1592 Intel E810-XXVDA2 8086 159b Intel E810-XXVDA4 8086 1593 Mellanox MT27700 Family [ConnectX‐4] 15b3 1013 Mellanox MT27710 Family [ConnectX‐4 Lx] 15b3 1015 Mellanox MT27800 Family [ConnectX‐5] 15b3 1017 Mellanox MT28880 Family [ConnectX‐5 Ex] 15b3 1019 Mellanox MT28908 Family [ConnectX‐6] 15b3 101b Mellanox MT2894 Family [ConnectX‐6 Lx] 15b3 101f Note For the most up-to-date list of supported cards and compatible OpenShift Container Platform versions available, see Openshift Single Root I/O Virtualization (SR-IOV) and PTP hardware networks Support Matrix . 14.1.1.3. Automated discovery of SR-IOV network devices The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device. The CR is assigned the same name as the worker node. The status.interfaces list provides information about the network devices on a node. Important Do not modify a SriovNetworkNodeState object. The Operator creates and manages these resources automatically. 14.1.1.3.1. Example SriovNetworkNodeState object The following YAML is an example of a SriovNetworkNodeState object created by the SR-IOV Network Operator: An SriovNetworkNodeState object apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: "39824" status: interfaces: 2 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: "0000:18:00.0" totalvfs: 8 vendor: 15b3 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: "0000:18:00.1" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: "8086" syncStatus: Succeeded 1 The value of the name field is the same as the name of the worker node. 2 The interfaces stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. 14.1.1.4. Example use of a virtual function in a pod You can run a remote direct memory access (RDMA) or a Data Plane Development Kit (DPDK) application in a pod with SR-IOV VF attached. 
This example shows a pod using a virtual function (VF) in RDMA mode: Pod spec that uses RDMA mode apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] command: ["sleep", "infinity"] The following example shows a pod with a VF in DPDK mode: Pod spec that uses DPDK mode apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" requests: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 14.1.1.5. DPDK library for use with container applications An optional library , app-netutil , provides several API methods for gathering network information about a pod from within a container running within that pod. This library can assist with integrating SR-IOV virtual functions (VFs) in Data Plane Development Kit (DPDK) mode into the container. The library provides both a Golang API and a C API. Currently there are three API methods implemented: GetCPUInfo() This function determines which CPUs are available to the container and returns the list. GetHugepages() This function determines the amount of huge page memory requested in the Pod spec for each container and returns the values. GetInterfaces() This function determines the set of interfaces in the container and returns the list. The return value includes the interface type and type-specific data for each interface. The repository for the library includes a sample Dockerfile to build a container image, dpdk-app-centos . The container image can run one of the following DPDK sample applications, depending on an environment variable in the pod specification: l2fwd , l3wd or testpmd . The container image provides an example of integrating the app-netutil library into the container image itself. The library can also integrate into an init container. The init container can collect the required data and pass the data to an existing DPDK workload. 14.1.1.6. Huge pages resource injection for Downward API When a pod specification includes a resource request or limit for huge pages, the Network Resources Injector automatically adds Downward API fields to the pod specification to provide the huge pages information to the container. The Network Resources Injector adds a volume that is named podnetinfo and is mounted at /etc/podnetinfo for each container in the pod. The volume uses the Downward API and includes a file for huge pages requests and limits. The file naming convention is as follows: /etc/podnetinfo/hugepages_1G_request_<container-name> /etc/podnetinfo/hugepages_1G_limit_<container-name> /etc/podnetinfo/hugepages_2M_request_<container-name> /etc/podnetinfo/hugepages_2M_limit_<container-name> The paths specified in the list are compatible with the app-netutil library. By default, the library is configured to search for resource information in the /etc/podnetinfo directory. If you choose to specify the Downward API path items yourself manually, the app-netutil library searches for the following paths in addition to the paths in the list. 
/etc/podnetinfo/hugepages_request /etc/podnetinfo/hugepages_limit /etc/podnetinfo/hugepages_1G_request /etc/podnetinfo/hugepages_1G_limit /etc/podnetinfo/hugepages_2M_request /etc/podnetinfo/hugepages_2M_limit As with the paths that the Network Resources Injector can create, the paths in the preceding list can optionally end with a _<container-name> suffix. 14.1.2. steps Installing the SR-IOV Network Operator Optional: Configuring the SR-IOV Network Operator Configuring an SR-IOV network device If you use OpenShift Virtualization: Connecting a virtual machine to an SR-IOV network Configuring an SR-IOV network attachment Adding a pod to an SR-IOV additional network 14.2. Installing the SR-IOV Network Operator You can install the Single Root I/O Virtualization (SR-IOV) Network Operator on your cluster to manage SR-IOV network devices and network attachments. 14.2.1. Installing SR-IOV Network Operator As a cluster administrator, you can install the SR-IOV Network Operator by using the OpenShift Container Platform CLI or the web console. 14.2.1.1. CLI: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. Procedure To create the openshift-sriov-network-operator namespace, enter the following command: USD cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF To create an OperatorGroup CR, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF Subscribe to the SR-IOV Network Operator. Run the following command to get the OpenShift Container Platform major and minor version. It is required for the channel value in the step. USD OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) To create a Subscription CR for the SR-IOV Network Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "USD{OC_VERSION}" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF To verify that the Operator is installed, enter the following command: USD oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase sriov-network-operator.4.9.0-202110121402 Succeeded 14.2.1.2. Web console: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the web console. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. Procedure Install the SR-IOV Network Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select SR-IOV Network Operator from the list of available Operators, and then click Install . On the Install Operator page, under Installed Namespace , select Operator recommended Namespace . Click Install . 
Verify that the SR-IOV Network Operator is installed successfully: Navigate to the Operators Installed Operators page. Ensure that SR-IOV Network Operator is listed in the openshift-sriov-network-operator project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-sriov-network-operator project. Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command: USD oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management Note For single-node OpenShift clusters, the annotation workload.openshift.io/allowed=management is required for the namespace. 14.2.2. steps Optional: Configuring the SR-IOV Network Operator 14.3. Configuring the SR-IOV Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. 14.3.1. Configuring the SR-IOV Network Operator Important Modifying the SR-IOV Network Operator configuration is not normally necessary. The default configuration is recommended for most use cases. Complete the steps to modify the relevant configuration only if the default behavior of the Operator is not compatible with your use case. The SR-IOV Network Operator adds the SriovOperatorConfig.sriovnetwork.openshift.io CustomResourceDefinition resource. The Operator automatically creates a SriovOperatorConfig custom resource (CR) named default in the openshift-sriov-network-operator namespace. Note The default CR contains the SR-IOV Network Operator configuration for your cluster. To change the Operator configuration, you must modify this CR. 14.3.1.1. SR-IOV Network Operator config custom resource The fields for the sriovoperatorconfig custom resource are described in the following table: Table 14.2. SR-IOV Network Operator config custom resource Field Type Description metadata.name string Specifies the name of the SR-IOV Network Operator instance. The default value is default . Do not set a different value. metadata.namespace string Specifies the namespace of the SR-IOV Network Operator instance. The default value is openshift-sriov-network-operator . Do not set a different value. spec.configDaemonNodeSelector string Specifies the node selection to control scheduling the SR-IOV Network Config Daemon on selected nodes. By default, this field is not set and the Operator deploys the SR-IOV Network Config daemon set on worker nodes. spec.disableDrain boolean Specifies whether to disable the node draining process or enable the node draining process when you apply a new policy to configure the NIC on a node. Setting this field to true facilitates software development and installing OpenShift Container Platform on a single node. By default, this field is not set. For single-node clusters, set this field to true after installing the Operator. This field must remain set to true . spec.enableInjector boolean Specifies whether to enable or disable the Network Resources Injector daemon set. By default, this field is set to true . 
spec.enableOperatorWebhook boolean Specifies whether to enable or disable the Operator Admission Controller webhook daemon set. By default, this field is set to true . spec.logLevel integer Specifies the log verbosity level of the Operator. Set to 0 to show only the basic logs. Set to 2 to show all the available logs. By default, this field is set to 2 . 14.3.1.2. About the Network Resources Injector The Network Resources Injector is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Mutation of resource requests and limits in a pod specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. Mutation of a pod specification with a Downward API volume to expose pod annotations, labels, and huge pages requests and limits. Containers that run in the pod can access the exposed information as files under the /etc/podnetinfo path. By default, the Network Resources Injector is enabled by the SR-IOV Network Operator and runs as a daemon set on all control plane nodes. The following is an example of Network Resources Injector pods running in a cluster with three control plane nodes: USD oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m 14.3.1.3. About the SR-IOV Network Operator admission controller webhook The SR-IOV Network Operator Admission Controller webhook is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Validation of the SriovNetworkNodePolicy CR when it is created or updated. Mutation of the SriovNetworkNodePolicy CR by setting the default value for the priority and deviceType fields when the CR is created or updated. By default the SR-IOV Network Operator Admission Controller webhook is enabled by the Operator and runs as a daemon set on all control plane nodes. Note Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. The following is an example of the Operator Admission Controller webhook pods running in a cluster with three control plane nodes: USD oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m 14.3.1.4. About custom node selectors The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. 14.3.1.5. Disabling or enabling the Network Resources Injector To disable or enable the Network Resources Injector, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure Set the enableInjector field. Replace <value> with false to disable the feature or true to enable the feature. 
USD oc patch sriovoperatorconfig default \ --type=merge -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableInjector": <value> } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value> 14.3.1.6. Disabling or enabling the SR-IOV Network Operator admission controller webhook To disable or enable the admission controller webhook, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure Set the enableOperatorWebhook field. Replace <value> with false to disable the feature or true to enable it: USD oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableOperatorWebhook": <value> } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value> 14.3.1.7. Configuring a custom NodeSelector for the SR-IOV Network Config daemon The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. To specify the nodes where the SR-IOV Network Config daemon is deployed, complete the following procedure. Important When you update the configDaemonNodeSelector field, the SR-IOV Network Config daemon is recreated on each selected node. While the daemon is recreated, cluster users are unable to apply any new SR-IOV Network node policy or create new SR-IOV pods. Procedure To update the node selector for the operator, enter the following command: USD oc patch sriovoperatorconfig default --type=json \ -n openshift-sriov-network-operator \ --patch '[{ "op": "replace", "path": "/spec/configDaemonNodeSelector", "value": {<node_label>} }]' Replace <node_label> with a label to apply as in the following example: "node-role.kubernetes.io/worker": "" . Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label> 14.3.1.8. Configuring the SR-IOV Network Operator for single node installations By default, the SR-IOV Network Operator drains workloads from a node before every policy change. The Operator performs this action to ensure that there no workloads using the virtual functions before the reconfiguration. For installations on a single node, there are no other nodes to receive the workloads. As a result, the Operator must be configured not to drain the workloads from the single node. Important After performing the following procedure to disable draining workloads, you must remove any workload that uses an SR-IOV network interface before you change any SR-IOV network node policy. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. 
Procedure To set the disableDrain field to true , enter the following command: USD oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "disableDrain": true } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true 14.3.2. steps Configuring an SR-IOV network device 14.4. Configuring an SR-IOV network device You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster. 14.4.1. SR-IOV network node configuration object You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the sriovnetwork.openshift.io API group. The following YAML describes an SR-IOV network node policy: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 nicSelector: 9 vendor: "<vendor_code>" 10 deviceID: "<device_id>" 11 pfNames: ["<pf_name>", ...] 12 rootDevices: ["<pci_bus_id>", ...] 13 netFilter: "<filter_string>" 14 deviceType: <device_type> 15 isRdma: false 16 linkType: <link_type> 17 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 Optional: The maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different network interface controller (NIC) models. 7 Optional: Set needVhostNet to true to mount the /dev/vhost-net device in the pod. Use the mounted /dev/vhost-net device with Data Plane Development Kit (DPDK) to forward traffic to the kernel network stack. 8 The number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128 . 9 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 10 Optional: The vendor hexadecimal code of the SR-IOV network device. The only allowed values are 8086 and 15b3 . 
11 Optional: The device hexadecimal code of the SR-IOV network device. For example, 101b is the device ID for a Mellanox ConnectX-6 device. 12 Optional: An array of one or more physical function (PF) names for the device. 13 Optional: An array of one or more PCI bus addresses for the PF of the device. Provide the address in the following format: 0000:02:00.1 . 14 Optional: The platform-specific network filter. The only supported platform is Red Hat OpenStack Platform (RHOSP). Acceptable values use the following format: openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx . Replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with the value from the /var/config/openstack/latest/network_data.json metadata file. 15 Optional: The driver type for the virtual functions. The only allowed values are netdevice and vfio-pci . The default value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, use the netdevice driver type and set isRdma to true . 16 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. 17 Optional: The link type for the VFs. The default value is eth for Ethernet. Change this value to ib for InfiniBand. When linkType is set to ib , isRdma is automatically set to true by the SR-IOV Network Operator webhook. When linkType is set to ib , deviceType should not be set to vfio-pci . Do not set linkType to eth for SriovNetworkNodePolicy , because this can lead to an incorrect number of available devices reported by the device plug-in. 14.4.1.1. SR-IOV network node configuration examples The following example describes the configuration for an InfiniBand device: Example configuration for an InfiniBand device apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "15b3" deviceID: "101b" rootDevices: - "0000:19:00.0" linkType: ib isRdma: true The following example describes the configuration for an SR-IOV network device in a RHOSP virtual machine: Example configuration for an SR-IOV device in a virtual machine apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 1 1 nicSelector: vendor: "15b3" deviceID: "101b" netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" 2 1 The numVfs field is always set to 1 when configuring the node network policy for a virtual machine. 2 The netFilter field must refer to a network ID when the virtual machine is deployed on RHOSP. Valid values for netFilter are available from an SriovNetworkNodeState object. 14.4.1.2. Virtual function (VF) partitioning for SR-IOV devices In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into multiple resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs load with the vfio-pci driver. 
In such a deployment, the pfNames selector in your SriovNetworkNodePolicy custom resource (CR) can be used to specify a range of VFs for a pool using the following format: <pfname>#<first_vf>-<last_vf> . For example, the following YAML shows the selector for an interface named netpf0 with VF 2 through 7 : pfNames: ["netpf0#2-7"] netpf0 is the PF interface name. 2 is the first VF index (0-based) that is included in the range. 7 is the last VF index (0-based) that is included in the range. You can select VFs from the same PF by using different policy CRs if the following requirements are met: The numVfs value must be identical for policies that select the same PF. The VF index must be in the range of 0 to <numVfs>-1 . For example, if you have a policy with numVfs set to 8 , then the <first_vf> value must not be smaller than 0 , and the <last_vf> must not be larger than 7 . The VFs ranges in different policies must not overlap. The <first_vf> must not be larger than the <last_vf> . The following example illustrates NIC partitioning for an SR-IOV device. The policy policy-net-1 defines a resource pool net-1 that contains the VF 0 of PF netpf0 with the default VF driver. The policy policy-net-1-dpdk defines a resource pool net-1-dpdk that contains the VF 8 to 15 of PF netpf0 with the vfio VF driver. Policy policy-net-1 : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#0-0"] deviceType: netdevice Policy policy-net-1-dpdk : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#8-15"] deviceType: vfio-pci 14.4.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. 
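While the Operator applies the policy, you can watch the per-node synchronization progress by listing the SriovNetworkNodeState objects. The following is a minimal sketch; the NODE and SYNC column names are arbitrary labels, the output shown is illustrative, and the command reads the same .status.syncStatus field that is shown in the SriovNetworkNodeState example earlier in this chapter:

USD oc get sriovnetworknodestates -n openshift-sriov-network-operator \
  -o custom-columns=NODE:.metadata.name,SYNC:.status.syncStatus

Example output

NODE      SYNC
node-25   Succeeded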
To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Additional resources Understanding how to update labels on nodes . 14.4.3. Troubleshooting SR-IOV configuration After following the procedure to configure an SR-IOV network device, the following sections address some error conditions. To display the state of nodes, run the following command: USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> where: <node_name> specifies the name of a node with an SR-IOV network device. Error output: Cannot allocate memory "lastSyncError": "write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory" When a node indicates that it cannot allocate memory, check the following items: Confirm that global SR-IOV settings are enabled in the BIOS for the node. Confirm that VT-d is enabled in the BIOS for the node. 14.4.4. Assigning an SR-IOV network to a VRF As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plugin. To do this, add the VRF configuration to the optional metaPlugins parameter of the SriovNetwork resource. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. 14.4.4.1. Creating an additional SR-IOV network attachment with the CNI VRF plugin The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional SR-IOV network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } vlan: 0 resourceName: intelnics metaPlugins : | { "type": "vrf", 1 "vrfname": "example-vrf-name" 2 } 1 type must be set to vrf . 2 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 
Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command. USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-sriov-network-1 . Example output NAME AGE additional-sriov-network-1 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the VRF CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create an SR-IOV network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the SR-IOV additional network. Remote shell into the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- red 10 Confirm the VRF interface is master of the secondary interface: USD ip link Example output ... 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode ... 14.4.5. steps Configuring an SR-IOV network attachment 14.5. Configuring an SR-IOV Ethernet network attachment You can configure an Ethernet network attachment for an Single Root I/O Virtualization (SR-IOV) device in the cluster. 14.5.1. Ethernet device configuration object You can configure an Ethernet network device by defining an SriovNetwork object. The following YAML describes an SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: "<trust_vf>" 12 capabilities: <capabilities> 13 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: A Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: The spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the object is rejected by the SR-IOV Network Operator. 7 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 8 Optional: The link state of virtual function (VF). Allowed value are enable , disable and auto . 9 Optional: A maximum transmission rate, in Mbps, for the VF. 10 Optional: A minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 11 Optional: An IEEE 802.1p priority level for the VF. The default value is 0 . 
12 Optional: The trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value that you specify in quotes, or the SR-IOV Network Operator rejects the object. 13 Optional: The capabilities to configure for this additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 14.5.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 14.5.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 14.3. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 14.4. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 14.5. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 14.6. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 14.5.1.1.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. The SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment.
To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 14.7. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 14.5.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 14.8. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 14.5.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 14.5.3. Next steps Adding a pod to an SR-IOV additional network 14.5.4. Additional resources Configuring an SR-IOV network device 14.6. Configuring an SR-IOV InfiniBand network attachment You can configure an InfiniBand (IB) network attachment for a Single Root I/O Virtualization (SR-IOV) device in the cluster. 14.6.1. InfiniBand device configuration object You can configure an InfiniBand (IB) network device by defining an SriovIBNetwork object.
The following YAML describes an SriovIBNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 The namespace where the SR-IOV Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovIBNetwork object. Only pods in the target namespace can attach to the network device. 5 Optional: A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: The link state of virtual function (VF). Allowed values are enable , disable and auto . 7 Optional: The capabilities to configure for this network. You can specify "{ "ips": true }" to enable IP address support or "{ "infinibandGUID": true }" to enable IB Global Unique Identifier (GUID) support. 14.6.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 14.6.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 14.9. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 14.10. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 14.11. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 14.12. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 14.6.1.1.2.
Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 14.13. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 14.6.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 14.14. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 14.6.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovIBNetwork object. When you create an SriovIBNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovIBNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovIBNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovIBNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovIBNetwork object. USD oc get net-attach-def -n <namespace> 14.6.3. Next steps Adding a pod to an SR-IOV additional network 14.6.4. Additional resources Configuring an SR-IOV network device 14.7.
Adding a pod to an SR-IOV additional network You can add a pod to an existing Single Root I/O Virtualization (SR-IOV) network. 14.7.1. Runtime configuration for a network attachment When attaching a pod to an additional network, you can specify a runtime configuration to make specific customizations for the pod. For example, you can request a specific MAC hardware address. You specify the runtime configuration by setting an annotation in the pod specification. The annotation key is k8s.v1.cni.cncf.io/networks , and it accepts a JSON object that describes the runtime configuration. 14.7.1.1. Runtime configuration for an Ethernet-based SR-IOV attachment The following JSON describes the runtime configuration options for an Ethernet-based SR-IOV network attachment. [ { "name": "<name>", 1 "mac": "<mac_address>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "net1", "mac": "20:04:0f:f1:88:01", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 14.7.1.2. Runtime configuration for an InfiniBand-based SR-IOV attachment The following JSON describes the runtime configuration options for an InfiniBand-based SR-IOV network attachment. [ { "name": "<network_attachment>", 1 "infiniband-guid": "<guid>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 The InfiniBand GUID for the SR-IOV device. To use this feature, you also must specify { "infinibandGUID": true } in the SriovIBNetwork object. 3 The IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovIBNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "ib1", "infiniband-guid": "c2:11:22:33:44:55:66:77", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 14.7.2. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Note The SR-IOV Network Resource Injector adds the resource field to the first container in a pod automatically. If you are using an Intel network interface controller (NIC) in Data Plane Development Kit (DPDK) mode, only the first container in your pod is configured to access the NIC. 
Your SR-IOV additional network is configured for DPDK mode if the deviceType is set to vfio-pci in the SriovNetworkNodePolicy object. You can work around this issue by either ensuring that the container that needs access to the NIC is the first container defined in the Pod object or by disabling the Network Resource Injector. For more information, see BZ#1990953 . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Install the SR-IOV Operator. Create either an SriovNetwork object or an SriovIBNetwork object to attach the pod to. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/networks-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 14.7.3. Creating a non-uniform memory access (NUMA) aligned SR-IOV pod You can create a NUMA aligned SR-IOV pod by restricting SR-IOV and the CPU resources allocated from the same NUMA node with restricted or single-numa-node Topology Manager polices. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information on CPU Manager, see the "Additional resources" section. You have configured the Topology Manager policy to single-numa-node . Note When single-numa-node is unable to satisfy the request, you can configure the Topology Manager policy to restricted . Procedure Create the following SR-IOV pod spec, and then save the YAML in the <name>-sriov-pod.yaml file. Replace <name> with a name for this pod. 
The following example shows an SR-IOV pod spec: apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: ["sleep", "infinity"] resources: limits: memory: "1Gi" 3 cpu: "2" 4 requests: memory: "1Gi" cpu: "2" 1 Replace <name> with the name of the SR-IOV network attachment definition CR. 2 Replace <image> with the name of the sample-pod image. 3 To create the SR-IOV pod with guaranteed QoS, set memory limits equal to memory requests . 4 To create the SR-IOV pod with guaranteed QoS, set cpu limits equals to cpu requests . Create the sample SR-IOV pod by running the following command: USD oc create -f <filename> 1 1 Replace <filename> with the name of the file you created in the step. Confirm that the sample-pod is configured with guaranteed QoS. USD oc describe pod sample-pod Confirm that the sample-pod is allocated with exclusive CPUs. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus Confirm that the SR-IOV device and CPUs that are allocated for the sample-pod are on the same NUMA node. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus 14.7.4. Additional resources Configuring an SR-IOV Ethernet network attachment Configuring an SR-IOV InfiniBand network attachment Using CPU Manager 14.8. Using high performance multicast You can use multicast on your Single Root I/O Virtualization (SR-IOV) hardware network. 14.8.1. High performance multicast The OpenShift SDN default Container Network Interface (CNI) network provider supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. For applications such as streaming media, like Internet Protocol television (IPTV) and multipoint videoconferencing, you can utilize Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance. When using additional SR-IOV interfaces for multicast: Multicast packages must be sent or received by a pod through the additional SR-IOV interface. The physical network which connects the SR-IOV interfaces decides the multicast routing and topology, which is not controlled by OpenShift Container Platform. 14.8.2. Configuring an SR-IOV interface for multicast The follow procedure creates an example SR-IOV interface for multicast. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Create a SriovNetworkNodePolicy object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "8086" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0'] Create a SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { "type": "host-local", 2 "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [ {"dst": "224.0.0.0/5"}, {"dst": "232.0.0.0/5"} ], "gateway": "10.56.217.1" } resourceName: example 1 2 If you choose to configure DHCP as IPAM, ensure that you provision the following default routes through your DHCP server: 224.0.0.0/5 and 232.0.0.0/5 . 
This is to override the static multicast route set by the default network provider. Create a pod with multicast application: apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: ["NET_ADMIN"] 1 command: [ "sleep", "infinity"] 1 The NET_ADMIN capability is required only if your application needs to assign the multicast IP address to the SR-IOV interface. Otherwise, it can be omitted. 14.9. Using DPDK and RDMA The containerized Data Plane Development Kit (DPDK) application is supported on OpenShift Container Platform. You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA). For information on supported devices, refer to Supported devices . 14.9.1. Using a virtual function in DPDK mode with an Intel NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "8086" deviceID: "158b" pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: vfio-pci 1 1 Specify the driver type for the virtual functions to vfio-pci . Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f intel-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- # ... 1 vlan: <vlan> resourceName: intelnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f intel-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file. 
apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/intelnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/intelnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image which includes your application and the DPDK library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount a hugepage volume to the DPDK pod under /dev/hugepages . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G and hugepages=16 will result in 16*1Gi hugepages being allocated during system boot. Create the DPDK pod by running the following command: USD oc create -f intel-dpdk-pod.yaml 14.9.2. Using a virtual function in DPDK mode with a Mellanox NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-dpdk-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. The only allowed values for Mellanox cards are 1015 , 1017 . 2 Specify the driver type for the virtual functions to netdevice . A Mellanox SR-IOV VF can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container. 3 Enable RDMA mode.
This is required by Mellanox cards to work in DPDK mode. Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 # ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f mlx-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the mlx-dpdk-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/mlxnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/mlxnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object mlx-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image which includes your application and the DPDK library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the DPDK pod under /dev/hugepages . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet.
This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepage requires adding kernel arguments to Nodes. Create the DPDK pod by running the following command: USD oc create -f mlx-dpdk-pod.yaml 14.9.3. Using a virtual function in RDMA mode with a Mellanox NIC Important RDMA over Converged Ethernet (RoCE) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform. Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of SR-IOV network device. The only allowed values for Mellanox cards are 1015 , 1017 . 2 Specify the driver type for the virtual functions to netdevice . 3 Enable RDMA mode. Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-rdma-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 # ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. 
Create the SriovNetwork object by running the following command: USD oc create -f mlx-rdma-network.yaml Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: memory: "1Gi" cpu: "4" 5 hugepages-1Gi: "4Gi" 6 requests: memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object mlx-rdma-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the RDMA image which includes your application and the RDMA library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the RDMA pod under /dev/hugepages . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS. 6 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. Create the RDMA pod by running the following command: USD oc create -f mlx-rdma-pod.yaml 14.9.4. Additional resources Configuring an SR-IOV Ethernet network attachment . The app-netutil library provides several API methods for gathering network information about a container's parent pod. 14.10. Uninstalling the SR-IOV Network Operator To uninstall the SR-IOV Network Operator, you must delete any running SR-IOV workloads, uninstall the Operator, and delete the webhooks that the Operator used. 14.10.1. Uninstalling the SR-IOV Network Operator As a cluster administrator, you can uninstall the SR-IOV Network Operator. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have the SR-IOV Network Operator installed. Procedure Delete all SR-IOV custom resources (CRs): USD oc delete sriovnetwork -n openshift-sriov-network-operator --all USD oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all USD oc delete sriovibnetwork -n openshift-sriov-network-operator --all Follow the instructions in the "Deleting Operators from a cluster" section to remove the SR-IOV Network Operator from your cluster.
Delete the SR-IOV custom resource definitions that remain in the cluster after the SR-IOV Network Operator is uninstalled: USD oc delete crd sriovibnetworks.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io USD oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io USD oc delete crd sriovnetworks.sriovnetwork.openshift.io USD oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io Delete the SR-IOV webhooks: USD oc delete mutatingwebhookconfigurations network-resources-injector-config USD oc delete MutatingWebhookConfiguration sriov-operator-webhook-config USD oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config Delete the SR-IOV Network Operator namespace: USD oc delete namespace openshift-sriov-network-operator Additional resources Deleting Operators from a cluster
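After completing the uninstall procedure, you can optionally confirm that nothing was left behind. The following shell sketch is not part of the official procedure; it only reuses the resource names that the steps above delete, and each command is expected to return a "not found" error or produce no matching output once the uninstall is complete.

# Verify that no SR-IOV CRDs remain; expect no output from grep.
oc get crd -o name | grep sriovnetwork.openshift.io

# Verify that the SR-IOV webhook configurations are gone; expect "not found" errors.
oc get mutatingwebhookconfiguration network-resources-injector-config sriov-operator-webhook-config
oc get validatingwebhookconfiguration sriov-operator-webhook-config

# Verify that the Operator namespace has been deleted; expect a "not found" error.
oc get namespace openshift-sriov-network-operator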
[ "oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded", "apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]", "apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"USD{OC_VERSION}\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-network-operator.4.9.0-202110121402 Succeeded", "oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'", 
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value>", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value>", "oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node_label>} }]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label>", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"disableDrain\": true } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 nicSelector: 9 vendor: \"<vendor_code>\" 10 deviceID: \"<device_id>\" 11 pfNames: [\"<pf_name>\", ...] 12 rootDevices: [\"<pci_bus_id>\", ...] 13 netFilter: \"<filter_string>\" 14 deviceType: <device_type> 15 isRdma: false 16 linkType: <link_type> 17", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"15b3\" deviceID: \"101b\" rootDevices: - \"0000:19:00.0\" linkType: ib isRdma: true", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"15b3\" deviceID: \"101b\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2", "pfNames: [\"netpf0#2-7\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>", "\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"", 
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }", "oc create -f sriov-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE additional-sriov-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 
kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"", "oc create -f <filename> 1", "oc describe pod sample-pod", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example", "apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] 
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1", "oc create -f intel-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- ... 1 vlan: <vlan> resourceName: intelnics", "oc create -f intel-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f intel-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] 
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-rdma-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-rdma-network.yaml", "apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-rdma-pod.yaml", "oc delete sriovnetwork -n openshift-sriov-network-operator --all", "oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all", "oc delete sriovibnetwork -n openshift-sriov-network-operator --all", "oc delete crd sriovibnetworks.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io", "oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io", "oc delete crd sriovnetworks.sriovnetwork.openshift.io", "oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io", "oc delete mutatingwebhookconfigurations network-resources-injector-config", "oc delete MutatingWebhookConfiguration sriov-operator-webhook-config", "oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config", "oc delete namespace openshift-sriov-network-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/hardware-networks
Chapter 22. Tracing Routes
Chapter 22. Tracing Routes Debugging a route often involves solving one of two problems: A message was improperly transformed. A message failed to reach its destination endpoint. Tracing one or more test messages through the route is the easiest way to discover the source of such problems. The tooling's route tracing feature enables you to monitor the path a message takes through a route and see how the message is transformed as it passes from processor to processor. The Diagram View displays a graphical representation of the route, which enables you to see the path a message takes through it. For each processor in a route, it also displays the average processing time, in milliseconds, for all messages processed since route start-up and the number of messages processed since route start-up. The Messages View displays the messages processed by a JMS destination or route endpoint selected in the JMX Navigator tree. Selecting an individual message trace in the Messages View displays the full details and content of the message in the Properties view and highlights the corresponding node in the Diagram View . Tracing messages through a route involves the following steps: Section 22.1, "Creating test messages for route tracing" Section 22.2, "Activating route tracing" Section 22.3, "Tracing messages through a routing context" Section 22.4, "Deactivating route tracing" 22.1. Creating test messages for route tracing Overview Route tracing works with any kind of message structure. The Fuse Message wizard creates an empty .xml message, leaving the structuring of the message entirely up to you. Note If the folder where you want to store the test messages does not exist, you need to create it before you create the messages. Creating a new folder to store test messages To create a new folder: In the Project Explorer view, right-click the project root to open the context menu. Select New Folder to open the New Folder wizard. The project root appears in the Enter or select the parent folder field. Expand the nodes in the graphical representation of the project's hierarchy, and select the node you want to be the parent folder. In the Folder name field, enter a name for the new folder. Click Finish . The new folder appears in the Project Explorer view, under the selected parent folder. Note If the new folder does not appear, right-click the parent folder and select Refresh . Creating a test message To create a test message: In the Project Explorer view, right-click the project to open the context menu. Select New Fuse Message to open the New File wizard. Expand the nodes in the graphical representation of the project's hierarchy, and select the folder in which you want to store the new test message. In the File name field, enter a name for the message, or accept the default ( message.xml ). Click Finish . The new message opens in the XML editor. Enter the message contents, both body and header text. For reference, a minimal sample message is shown at the end of this chapter. Note You may see the warning, No grammar constraints (DTD or XML Schema) referenced in the document , depending on the header text you entered. You can safely ignore this warning. Related topics Section 22.3, "Tracing messages through a routing context" 22.2. Activating route tracing Overview You must activate route tracing for the routing context before you can trace messages through that routing context. Procedure To activate tracing on a routing context: In the JMX Navigator view, select the running routing context on which you want to start tracing.
Note You can select any route in the context to start tracing on the entire context. Right-click the selected routing context to open the context menu, and then select Start Tracing to start the trace. If Stop Tracing Context is enabled on the context menu, then tracing is already active. Related topics Section 22.3, "Tracing messages through a routing context" Section 22.4, "Deactivating route tracing" 22.3. Tracing messages through a routing context Overview The best way to see what is happening in a routing context is to watch what happens to a message at each stop along the way. The tooling provides a mechanism for dropping messages into a running routing context and tracing the path the messages take through it. Procedure To trace messages through a routing context: Create one or more test messages as described in Section 22.1, "Creating test messages for route tracing" . In the Project Explorer view, right-click the project's Camel context file to open the context menu, and select Run As Local Camel Context (without Tests) . Note Do not run it as Local Camel Context unless you have created a comprehensive JUnit test for the project. Activate tracing for the running routing context as described in Section 22.2, "Activating route tracing" . Drag one of the test messages from the Project Explorer view onto the routing context's starting point in the JMX Navigator view. In the JMX Navigator view, select the routing context being traced. The tooling populates the Messages View with message instances that represent the message at each stage in the traced context. The Diagram View displays a graphical representation of the selected routing context. In the Messages View , select one of the message instances. The Properties view displays the details and content of the message instance. In the Diagram View , the route step corresponding to the selected message instance is highlighted. If the route step is a processing step, the tooling tags the exiting path with timing and processing metrics. Repeat this procedure as needed. Related topics Section 22.1, "Creating test messages for route tracing" Section 22.2, "Activating route tracing" Section 22.4, "Deactivating route tracing" 22.4. Deactivating route tracing Overview When you are finished debugging the routes in a routing context, you should deactivate tracing. Important Deactivating tracing stops tracing and flushes the trace data for all of the routes in the routing context. This means that you cannot review any past tracing sessions. Procedure To stop tracing for a routing context: In the JMX Navigator view, select the running routing context for which you want to deactivate tracing. Note You can select any route in the context to stop tracing for the context. Right-click the selected routing context to open the context menu, and then select Stop Tracing Context . If Start Tracing appears on the context menu, tracing is not activated for the routing context. Related topics Section 22.2, "Activating route tracing" Section 22.3, "Tracing messages through a routing context"
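For reference, a test message of the kind created in Section 22.1, "Creating test messages for route tracing" might look like the following. This is only an illustrative sketch: the element names and values are placeholders, and the actual structure must match whatever your route's endpoints and processors expect. The example shows only a message body; add header text as required by your route.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative test message for route tracing.
     Replace the elements below with the structure your route expects. -->
<order>
    <customerId>12345</customerId>
    <priority>standard</priority>
    <items>
        <item quantity="2">widget</item>
    </items>
</order>

You can then drag a file like this from the Project Explorer view onto the routing context's starting point, as described in Section 22.3, "Tracing messages through a routing context".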
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/ridertracing